Sep 12 17:12:58.883794 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 17:12:58.883824 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025
Sep 12 17:12:58.883863 kernel: KASLR enabled
Sep 12 17:12:58.883869 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 12 17:12:58.883875 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Sep 12 17:12:58.883880 kernel: random: crng init done
Sep 12 17:12:58.883888 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:12:58.883893 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 12 17:12:58.883900 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:12:58.883907 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:12:58.883914 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:12:58.883920 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:12:58.883926 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:12:58.883932 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:12:58.883939 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:12:58.883947 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:12:58.883953 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:12:58.883960 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:12:58.883966 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 17:12:58.883972 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 12 17:12:58.883979 kernel: NUMA: Failed to initialise from firmware
Sep 12 17:12:58.883985 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 12 17:12:58.883991 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Sep 12 17:12:58.883997 kernel: Zone ranges:
Sep 12 17:12:58.884004 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 12 17:12:58.884012 kernel: DMA32 empty
Sep 12 17:12:58.884018 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 12 17:12:58.884024 kernel: Movable zone start for each node
Sep 12 17:12:58.884030 kernel: Early memory node ranges
Sep 12 17:12:58.884037 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Sep 12 17:12:58.884043 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 12 17:12:58.884050 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 12 17:12:58.884056 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 12 17:12:58.884063 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 12 17:12:58.884069 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 12 17:12:58.884075 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 12 17:12:58.884082 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 12 17:12:58.884090 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 12 17:12:58.884096 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:12:58.884103 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 17:12:58.884111 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:12:58.884118 kernel: psci: Trusted OS migration not required
Sep 12 17:12:58.884125 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:12:58.884134 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 17:12:58.884140 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 17:12:58.884147 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 17:12:58.884154 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 17:12:58.884161 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:12:58.884168 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:12:58.884175 kernel: CPU features: detected: Hardware dirty bit management
Sep 12 17:12:58.884181 kernel: CPU features: detected: Spectre-v4
Sep 12 17:12:58.884188 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:12:58.884195 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 17:12:58.884203 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 17:12:58.884210 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 17:12:58.884217 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 17:12:58.884223 kernel: alternatives: applying boot alternatives
Sep 12 17:12:58.884231 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:12:58.884239 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:12:58.884245 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:12:58.884253 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:12:58.884259 kernel: Fallback order for Node 0: 0
Sep 12 17:12:58.884266 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Sep 12 17:12:58.884273 kernel: Policy zone: Normal
Sep 12 17:12:58.884281 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:12:58.884288 kernel: software IO TLB: area num 2.
Sep 12 17:12:58.884294 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Sep 12 17:12:58.884302 kernel: Memory: 3882740K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 213260K reserved, 0K cma-reserved)
Sep 12 17:12:58.884309 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:12:58.884316 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:12:58.884323 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:12:58.884330 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:12:58.884337 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:12:58.884344 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:12:58.884351 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:12:58.884359 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:12:58.884366 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:12:58.884372 kernel: GICv3: 256 SPIs implemented
Sep 12 17:12:58.884379 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:12:58.884386 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:12:58.884392 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 17:12:58.884399 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 17:12:58.884406 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 17:12:58.884413 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:12:58.884420 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:12:58.884426 kernel: GICv3: using LPI property table @0x00000001000e0000
Sep 12 17:12:58.884433 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Sep 12 17:12:58.884441 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:12:58.884448 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:12:58.884455 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 17:12:58.884462 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 17:12:58.884469 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 17:12:58.884475 kernel: Console: colour dummy device 80x25
Sep 12 17:12:58.884482 kernel: ACPI: Core revision 20230628
Sep 12 17:12:58.884490 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 17:12:58.884496 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:12:58.884503 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:12:58.884512 kernel: landlock: Up and running.
Sep 12 17:12:58.884519 kernel: SELinux: Initializing.
Sep 12 17:12:58.884525 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:12:58.884533 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:12:58.884540 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:12:58.884547 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:12:58.884554 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:12:58.884561 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:12:58.884568 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 12 17:12:58.884587 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 12 17:12:58.884595 kernel: Remapping and enabling EFI services.
Sep 12 17:12:58.884602 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:12:58.884609 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:12:58.884616 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 17:12:58.884623 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Sep 12 17:12:58.884630 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:12:58.884637 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 17:12:58.884644 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:12:58.884651 kernel: SMP: Total of 2 processors activated.
Sep 12 17:12:58.884660 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:12:58.884667 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 17:12:58.884679 kernel: CPU features: detected: Common not Private translations
Sep 12 17:12:58.884688 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:12:58.884695 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 17:12:58.884703 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 17:12:58.884710 kernel: CPU features: detected: LSE atomic instructions
Sep 12 17:12:58.884717 kernel: CPU features: detected: Privileged Access Never
Sep 12 17:12:58.884725 kernel: CPU features: detected: RAS Extension Support
Sep 12 17:12:58.884734 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 17:12:58.884741 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:12:58.884749 kernel: alternatives: applying system-wide alternatives
Sep 12 17:12:58.884756 kernel: devtmpfs: initialized
Sep 12 17:12:58.884764 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:12:58.884771 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:12:58.884779 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:12:58.884788 kernel: SMBIOS 3.0.0 present.
Sep 12 17:12:58.884795 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Sep 12 17:12:58.884803 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:12:58.884811 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:12:58.884818 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:12:58.884826 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:12:58.884841 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:12:58.884849 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Sep 12 17:12:58.884857 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:12:58.884866 kernel: cpuidle: using governor menu
Sep 12 17:12:58.884873 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:12:58.884881 kernel: ASID allocator initialised with 32768 entries
Sep 12 17:12:58.884888 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:12:58.884896 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:12:58.884903 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 17:12:58.884910 kernel: Modules: 0 pages in range for non-PLT usage
Sep 12 17:12:58.884917 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 17:12:58.884925 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:12:58.884934 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:12:58.884942 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:12:58.884949 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:12:58.884956 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:12:58.884964 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:12:58.884971 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:12:58.884979 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:12:58.884986 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:12:58.884994 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:12:58.885002 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:12:58.885010 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:12:58.885018 kernel: ACPI: Interpreter enabled
Sep 12 17:12:58.885025 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:12:58.885032 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:12:58.885040 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 17:12:58.885047 kernel: printk: console [ttyAMA0] enabled
Sep 12 17:12:58.885054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:12:58.885216 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:12:58.885303 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:12:58.885374 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:12:58.885441 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 17:12:58.885507 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 17:12:58.885517 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 17:12:58.885524 kernel: PCI host bridge to bus 0000:00
Sep 12 17:12:58.885643 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 17:12:58.885721 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:12:58.885782 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 17:12:58.885861 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:12:58.885949 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 12 17:12:58.886029 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Sep 12 17:12:58.886099 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Sep 12 17:12:58.886172 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 12 17:12:58.886250 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 12 17:12:58.886318 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Sep 12 17:12:58.886397 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 12 17:12:58.886466 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Sep 12 17:12:58.886539 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 12 17:12:58.886631 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Sep 12 17:12:58.886710 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 12 17:12:58.886778 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Sep 12 17:12:58.886879 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 12 17:12:58.886951 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Sep 12 17:12:58.887027 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 12 17:12:58.887099 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Sep 12 17:12:58.887175 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 12 17:12:58.887245 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Sep 12 17:12:58.887321 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 12 17:12:58.887391 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Sep 12 17:12:58.887466 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 12 17:12:58.887533 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Sep 12 17:12:58.887634 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Sep 12 17:12:58.887707 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Sep 12 17:12:58.887788 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 12 17:12:58.889982 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Sep 12 17:12:58.890083 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 17:12:58.890157 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 12 17:12:58.890248 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 12 17:12:58.890320 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Sep 12 17:12:58.890400 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 12 17:12:58.890472 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Sep 12 17:12:58.890541 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Sep 12 17:12:58.890635 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 12 17:12:58.890709 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Sep 12 17:12:58.890792 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 12 17:12:58.891029 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Sep 12 17:12:58.891129 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 12 17:12:58.891200 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Sep 12 17:12:58.891271 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 12 17:12:58.891358 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 12 17:12:58.891432 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Sep 12 17:12:58.891500 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Sep 12 17:12:58.891569 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 12 17:12:58.891690 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Sep 12 17:12:58.891758 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Sep 12 17:12:58.891823 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Sep 12 17:12:58.891921 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Sep 12 17:12:58.891989 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Sep 12 17:12:58.892057 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Sep 12 17:12:58.892129 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 12 17:12:58.892198 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Sep 12 17:12:58.892266 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Sep 12 17:12:58.892338 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 12 17:12:58.892405 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Sep 12 17:12:58.892474 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Sep 12 17:12:58.892543 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 12 17:12:58.892631 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Sep 12 17:12:58.892701 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Sep 12 17:12:58.892773 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 12 17:12:58.894953 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Sep 12 17:12:58.895069 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Sep 12 17:12:58.895151 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 12 17:12:58.895218 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Sep 12 17:12:58.895284 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Sep 12 17:12:58.895354 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 12 17:12:58.895419 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Sep 12 17:12:58.895485 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Sep 12 17:12:58.895556 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 12 17:12:58.895643 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Sep 12 17:12:58.895717 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Sep 12 17:12:58.895788 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Sep 12 17:12:58.895887 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 12 17:12:58.895958 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Sep 12 17:12:58.896023 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 12 17:12:58.896092 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Sep 12 17:12:58.896163 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 12 17:12:58.896232 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Sep 12 17:12:58.896300 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 12 17:12:58.896369 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Sep 12 17:12:58.896437 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 12 17:12:58.896503 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Sep 12 17:12:58.896570 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 12 17:12:58.896694 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Sep 12 17:12:58.896763 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 12 17:12:58.897169 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Sep 12 17:12:58.897268 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 12 17:12:58.897338 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Sep 12 17:12:58.897406 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 12 17:12:58.897477 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Sep 12 17:12:58.897550 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Sep 12 17:12:58.897641 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Sep 12 17:12:58.897713 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 12 17:12:58.897781 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Sep 12 17:12:58.897862 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 12 17:12:58.897935 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Sep 12 17:12:58.898002 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 12 17:12:58.898070 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Sep 12 17:12:58.898145 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 12 17:12:58.898216 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Sep 12 17:12:58.898284 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 12 17:12:58.898353 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Sep 12 17:12:58.898421 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 12 17:12:58.898493 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Sep 12 17:12:58.898560 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 12 17:12:58.898642 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Sep 12 17:12:58.898716 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 12 17:12:58.898784 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Sep 12 17:12:58.899313 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Sep 12 17:12:58.899398 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Sep 12 17:12:58.899475 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Sep 12 17:12:58.899542 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 17:12:58.899659 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Sep 12 17:12:58.899732 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 12 17:12:58.899807 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 12 17:12:58.899895 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Sep 12 17:12:58.899963 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 12 17:12:58.900039 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Sep 12 17:12:58.900109 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 12 17:12:58.900181 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 12 17:12:58.900246 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Sep 12 17:12:58.900313 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 12 17:12:58.900388 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 12 17:12:58.900460 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Sep 12 17:12:58.900528 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 12 17:12:58.900624 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 12 17:12:58.900702 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Sep 12 17:12:58.900770 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 12 17:12:58.900936 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 12 17:12:58.901017 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 12 17:12:58.901086 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 12 17:12:58.901154 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Sep 12 17:12:58.901221 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 12 17:12:58.901295 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Sep 12 17:12:58.901372 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 12 17:12:58.901443 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 12 17:12:58.901533 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 12 17:12:58.901630 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 12 17:12:58.901712 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Sep 12 17:12:58.901784 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Sep 12 17:12:58.904955 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 12 17:12:58.905062 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 12 17:12:58.905149 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Sep 12 17:12:58.905229 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 12 17:12:58.905320 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Sep 12 17:12:58.905400 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Sep 12 17:12:58.905477 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Sep 12 17:12:58.905555 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 12 17:12:58.905688 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 12 17:12:58.905786 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Sep 12 17:12:58.907884 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 12 17:12:58.908014 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 12 17:12:58.908086 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 12 17:12:58.908157 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Sep 12 17:12:58.908225 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 12 17:12:58.908299 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 12 17:12:58.908367 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Sep 12 17:12:58.908435 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Sep 12 17:12:58.908512 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 12 17:12:58.908602 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 17:12:58.908668 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:12:58.908730 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 17:12:58.908811 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 12 17:12:58.908895 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Sep 12 17:12:58.908959 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 12 17:12:58.909037 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Sep 12 17:12:58.909100 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Sep 12 17:12:58.909163 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 12 17:12:58.909234 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Sep 12 17:12:58.909297 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Sep 12 17:12:58.909359 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 12 17:12:58.909433 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 12 17:12:58.909496 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Sep 12 17:12:58.909562 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 12 17:12:58.909698 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Sep 12 17:12:58.909770 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Sep 12 17:12:58.910201 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 12 17:12:58.911421 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Sep 12 17:12:58.911509 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Sep 12 17:12:58.911614 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 12 17:12:58.911730 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Sep 12 17:12:58.911973 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Sep 12 17:12:58.912053 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 12 17:12:58.912129 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Sep 12 17:12:58.912193 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Sep 12 17:12:58.912258 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 12 17:12:58.912335 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Sep 12 17:12:58.912398 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Sep 12 17:12:58.912463 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 12 17:12:58.912476 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:12:58.912484 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:12:58.912493 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:12:58.912501 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:12:58.912509 kernel: iommu: Default domain type: Translated
Sep 12 17:12:58.912517 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:12:58.912524 kernel: efivars: Registered efivars operations
Sep 12 17:12:58.912532 kernel: vgaarb: loaded
Sep 12 17:12:58.912540 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:12:58.912550 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:12:58.912558 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:12:58.912566 kernel: pnp: PnP ACPI init
Sep 12 17:12:58.912664 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 17:12:58.912677 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:12:58.912685 kernel: NET: Registered PF_INET protocol family
Sep 12 17:12:58.912693 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:12:58.912701 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:12:58.912713 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:12:58.912722 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:12:58.912730 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:12:58.912738 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:12:58.912746 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:12:58.912754 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:12:58.912762 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:12:58.912878 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Sep 12 17:12:58.912891 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:12:58.912902 kernel: kvm [1]: HYP mode not available
Sep 12 17:12:58.912910 kernel: Initialise system trusted keyrings
Sep 12 17:12:58.912918 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:12:58.912926 kernel: Key type asymmetric registered
Sep 12 17:12:58.912934 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:12:58.912942 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:12:58.912949 kernel: io scheduler mq-deadline registered
Sep 12 17:12:58.912957 kernel: io scheduler kyber registered
Sep 12 17:12:58.912965 kernel: io scheduler bfq registered
Sep 12 17:12:58.912975 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 12 17:12:58.913053 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Sep 12 17:12:58.913127 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Sep 12 17:12:58.913197 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 17:12:58.913271 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Sep 12 17:12:58.913340 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Sep 12 17:12:58.913411 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 17:12:58.913485 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Sep 12 17:12:58.913554 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Sep 12 17:12:58.913639 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 17:12:58.913715 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Sep 12 17:12:58.913785 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Sep 12 17:12:58.913900 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 17:12:58.913980 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Sep 12 17:12:58.914054 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Sep 12 17:12:58.914122 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 17:12:58.914193 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Sep 12 17:12:58.914260 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Sep 12 17:12:58.914332 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 17:12:58.914403 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Sep 12 17:12:58.914469 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Sep 12 17:12:58.914536 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 17:12:58.914654 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Sep 12 17:12:58.914733 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Sep 12 17:12:58.914806 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 17:12:58.914817 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Sep 12 17:12:58.915023 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Sep 12 17:12:58.915098 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Sep 12 17:12:58.915164 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 17:12:58.915175 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:12:58.915183 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:12:58.915196 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:12:58.915269 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Sep 12 17:12:58.915344 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Sep 12 17:12:58.915355 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:12:58.915363 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 12 17:12:58.915433 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Sep 12 17:12:58.915445 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Sep 12 17:12:58.915452 kernel: thunder_xcv, ver 1.0
Sep 12 17:12:58.915460 kernel: thunder_bgx, ver 1.0
Sep 12 17:12:58.915470 kernel: nicpf, ver 1.0
Sep 12 17:12:58.915478 kernel: nicvf, ver 1.0
Sep 12 17:12:58.915558 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:12:58.915640 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:12:58 UTC (1757697178)
Sep 12 17:12:58.915651 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:12:58.915659 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 12 17:12:58.915667 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 17:12:58.915675 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:12:58.915687 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:12:58.915695 kernel: Segment Routing with IPv6
Sep 12 17:12:58.915702 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:12:58.915710 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:12:58.915718 kernel: Key type dns_resolver registered
Sep 12 17:12:58.915726 kernel: registered taskstats version 1
Sep 12 17:12:58.915733 kernel: Loading compiled-in X.509 certificates
Sep 12 17:12:58.915741 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02'
Sep 12 17:12:58.915749 kernel: Key type .fscrypt registered
Sep 12 17:12:58.915759 kernel: Key type fscrypt-provisioning registered
Sep 12 17:12:58.915767 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:12:58.915774 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:12:58.915782 kernel: ima: No architecture policies found
Sep 12 17:12:58.915790 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:12:58.915798 kernel: clk: Disabling unused clocks
Sep 12 17:12:58.915806 kernel: Freeing unused kernel memory: 39488K
Sep 12 17:12:58.915813 kernel: Run /init as init process
Sep 12 17:12:58.915821 kernel: with arguments:
Sep 12 17:12:58.915905 kernel: /init
Sep 12 17:12:58.915913 kernel: with environment:
Sep 12 17:12:58.915921 kernel: HOME=/
Sep 12 17:12:58.915928 kernel: TERM=linux
Sep 12 17:12:58.915936 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:12:58.915946 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:12:58.915957 systemd[1]: Detected virtualization kvm.
Sep 12 17:12:58.915966 systemd[1]: Detected architecture arm64.
Sep 12 17:12:58.915977 systemd[1]: Running in initrd.
Sep 12 17:12:58.915985 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:12:58.915993 systemd[1]: Hostname set to .
Sep 12 17:12:58.916002 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:12:58.916010 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:12:58.916018 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:12:58.916027 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:12:58.916035 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:12:58.916047 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:12:58.916056 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:12:58.916065 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:12:58.916074 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:12:58.916083 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:12:58.916091 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:12:58.916101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:12:58.916110 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:12:58.916118 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:12:58.916126 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:12:58.916134 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:12:58.916143 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:12:58.916151 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:12:58.916160 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:12:58.916168 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:12:58.916178 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:12:58.916186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:12:58.916195 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:12:58.916203 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:12:58.916211 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:12:58.916220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:12:58.916228 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:12:58.916236 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:12:58.916244 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:12:58.916255 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:12:58.916264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:12:58.916272 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:12:58.916280 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:12:58.916315 systemd-journald[236]: Collecting audit messages is disabled.
Sep 12 17:12:58.916339 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:12:58.916349 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:12:58.916358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:12:58.916369 systemd-journald[236]: Journal started
Sep 12 17:12:58.916388 systemd-journald[236]: Runtime Journal (/run/log/journal/cf2a675b94e2439091dc01b9f3a2dd1c) is 8.0M, max 76.6M, 68.6M free.
Sep 12 17:12:58.903242 systemd-modules-load[237]: Inserted module 'overlay'
Sep 12 17:12:58.918252 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:12:58.924858 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:12:58.925879 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:12:58.931892 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:12:58.933658 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:12:58.935914 kernel: Bridge firewalling registered
Sep 12 17:12:58.935451 systemd-modules-load[237]: Inserted module 'br_netfilter'
Sep 12 17:12:58.937027 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:12:58.939634 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:12:58.949251 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:12:58.959506 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:12:58.964069 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:12:58.966001 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:12:58.967662 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:12:58.974010 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:12:58.977184 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:12:58.993268 dracut-cmdline[271]: dracut-dracut-053
Sep 12 17:12:58.996338 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:12:59.009800 systemd-resolved[272]: Positive Trust Anchors:
Sep 12 17:12:59.009818 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:12:59.009873 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:12:59.016100 systemd-resolved[272]: Defaulting to hostname 'linux'.
Sep 12 17:12:59.018173 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:12:59.020463 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:12:59.090856 kernel: SCSI subsystem initialized
Sep 12 17:12:59.094865 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:12:59.101858 kernel: iscsi: registered transport (tcp)
Sep 12 17:12:59.115868 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:12:59.115936 kernel: QLogic iSCSI HBA Driver
Sep 12 17:12:59.164928 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:12:59.168981 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:12:59.189873 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:12:59.189963 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:12:59.189990 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:12:59.255840 kernel: raid6: neonx8 gen() 15695 MB/s
Sep 12 17:12:59.257911 kernel: raid6: neonx4 gen() 15606 MB/s
Sep 12 17:12:59.273892 kernel: raid6: neonx2 gen() 13167 MB/s
Sep 12 17:12:59.290911 kernel: raid6: neonx1 gen() 10466 MB/s
Sep 12 17:12:59.307899 kernel: raid6: int64x8 gen() 6930 MB/s
Sep 12 17:12:59.324864 kernel: raid6: int64x4 gen() 7318 MB/s
Sep 12 17:12:59.341894 kernel: raid6: int64x2 gen() 6098 MB/s
Sep 12 17:12:59.358898 kernel: raid6: int64x1 gen() 5027 MB/s
Sep 12 17:12:59.358972 kernel: raid6: using algorithm neonx8 gen() 15695 MB/s
Sep 12 17:12:59.375897 kernel: raid6: .... xor() 11977 MB/s, rmw enabled
Sep 12 17:12:59.375945 kernel: raid6: using neon recovery algorithm
Sep 12 17:12:59.381072 kernel: xor: measuring software checksum speed
Sep 12 17:12:59.381119 kernel: 8regs : 19769 MB/sec
Sep 12 17:12:59.381153 kernel: 32regs : 19674 MB/sec
Sep 12 17:12:59.381872 kernel: arm64_neon : 27016 MB/sec
Sep 12 17:12:59.381922 kernel: xor: using function: arm64_neon (27016 MB/sec)
Sep 12 17:12:59.431873 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:12:59.445869 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:12:59.453153 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:12:59.468722 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Sep 12 17:12:59.472155 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:12:59.481012 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:12:59.494251 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Sep 12 17:12:59.529745 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:12:59.535011 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:12:59.584659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:12:59.590197 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:12:59.606361 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:12:59.607454 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:12:59.610083 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:12:59.612533 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:12:59.620622 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:12:59.643919 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:12:59.680060 kernel: scsi host0: Virtio SCSI HBA
Sep 12 17:12:59.694206 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 17:12:59.694332 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Sep 12 17:12:59.711249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:12:59.711372 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:12:59.713401 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:12:59.717783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:12:59.717983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:12:59.719354 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:12:59.721660 kernel: ACPI: bus type USB registered
Sep 12 17:12:59.722881 kernel: usbcore: registered new interface driver usbfs
Sep 12 17:12:59.722918 kernel: usbcore: registered new interface driver hub
Sep 12 17:12:59.722934 kernel: usbcore: registered new device driver usb
Sep 12 17:12:59.731127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:12:59.750488 kernel: sr 0:0:0:0: Power-on or device reset occurred
Sep 12 17:12:59.750957 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Sep 12 17:12:59.753121 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:12:59.752456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:12:59.759267 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Sep 12 17:12:59.759453 kernel: sd 0:0:0:1: Power-on or device reset occurred
Sep 12 17:12:59.759552 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Sep 12 17:12:59.761222 kernel: sd 0:0:0:1: [sda] Write Protect is off
Sep 12 17:12:59.761427 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Sep 12 17:12:59.761523 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 12 17:12:59.762029 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:12:59.767402 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:12:59.767434 kernel: GPT:17805311 != 80003071
Sep 12 17:12:59.767444 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:12:59.767454 kernel: GPT:17805311 != 80003071
Sep 12 17:12:59.767462 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:12:59.767471 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:12:59.767886 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Sep 12 17:12:59.775359 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 12 17:12:59.775635 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Sep 12 17:12:59.775740 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 12 17:12:59.780340 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 12 17:12:59.780538 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Sep 12 17:12:59.780674 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Sep 12 17:12:59.782030 kernel: hub 1-0:1.0: USB hub found
Sep 12 17:12:59.782204 kernel: hub 1-0:1.0: 4 ports detected
Sep 12 17:12:59.785197 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 12 17:12:59.789451 kernel: hub 2-0:1.0: USB hub found
Sep 12 17:12:59.789793 kernel: hub 2-0:1.0: 4 ports detected
Sep 12 17:12:59.802097 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:12:59.815041 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (511)
Sep 12 17:12:59.827844 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (506)
Sep 12 17:12:59.834338 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 12 17:12:59.846620 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Sep 12 17:12:59.853729 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Sep 12 17:12:59.856009 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Sep 12 17:12:59.862545 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Sep 12 17:12:59.877150 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:12:59.886201 disk-uuid[570]: Primary Header is updated.
Sep 12 17:12:59.886201 disk-uuid[570]: Secondary Entries is updated.
Sep 12 17:12:59.886201 disk-uuid[570]: Secondary Header is updated.
Sep 12 17:12:59.892864 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:12:59.897910 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:12:59.901869 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:13:00.024971 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 12 17:13:00.159052 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Sep 12 17:13:00.159131 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Sep 12 17:13:00.159393 kernel: usbcore: registered new interface driver usbhid
Sep 12 17:13:00.160026 kernel: usbhid: USB HID core driver
Sep 12 17:13:00.265923 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Sep 12 17:13:00.394896 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Sep 12 17:13:00.449876 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Sep 12 17:13:00.905917 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:13:00.907187 disk-uuid[571]: The operation has completed successfully.
Sep 12 17:13:00.961761 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:13:00.962876 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:13:00.973084 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:13:00.979317 sh[589]: Success
Sep 12 17:13:00.991853 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 17:13:01.053285 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:13:01.062187 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:13:01.063820 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:13:01.093153 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129
Sep 12 17:13:01.093246 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:13:01.093273 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:13:01.093308 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:13:01.093905 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:13:01.101894 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:13:01.103954 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:13:01.105299 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:13:01.119458 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:13:01.124936 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:13:01.133009 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:13:01.133070 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:13:01.133083 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:13:01.136862 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:13:01.136929 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:13:01.147466 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:13:01.148348 kernel: BTRFS info (device sda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:13:01.155161 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:13:01.161123 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:13:01.251876 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:13:01.261144 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:13:01.264966 ignition[669]: Ignition 2.19.0
Sep 12 17:13:01.264974 ignition[669]: Stage: fetch-offline
Sep 12 17:13:01.265009 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:13:01.265018 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:13:01.265182 ignition[669]: parsed url from cmdline: ""
Sep 12 17:13:01.267235 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:13:01.265185 ignition[669]: no config URL provided
Sep 12 17:13:01.265190 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:13:01.265197 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:13:01.265202 ignition[669]: failed to fetch config: resource requires networking
Sep 12 17:13:01.265484 ignition[669]: Ignition finished successfully
Sep 12 17:13:01.284463 systemd-networkd[776]: lo: Link UP
Sep 12 17:13:01.284477 systemd-networkd[776]: lo: Gained carrier
Sep 12 17:13:01.286095 systemd-networkd[776]: Enumeration completed
Sep 12 17:13:01.286580 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:13:01.286585 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:13:01.286821 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:13:01.287363 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:13:01.287367 systemd-networkd[776]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:13:01.288032 systemd-networkd[776]: eth0: Link UP
Sep 12 17:13:01.288036 systemd-networkd[776]: eth0: Gained carrier
Sep 12 17:13:01.288043 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:13:01.293070 systemd[1]: Reached target network.target - Network.
Sep 12 17:13:01.295180 systemd-networkd[776]: eth1: Link UP
Sep 12 17:13:01.295184 systemd-networkd[776]: eth1: Gained carrier
Sep 12 17:13:01.295195 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:13:01.300815 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:13:01.322097 systemd-networkd[776]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 12 17:13:01.323486 ignition[779]: Ignition 2.19.0
Sep 12 17:13:01.323493 ignition[779]: Stage: fetch
Sep 12 17:13:01.324623 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:13:01.324635 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:13:01.324734 ignition[779]: parsed url from cmdline: ""
Sep 12 17:13:01.324738 ignition[779]: no config URL provided
Sep 12 17:13:01.324742 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:13:01.324753 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:13:01.324774 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 12 17:13:01.326426 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 12 17:13:01.349947 systemd-networkd[776]: eth0: DHCPv4 address 5.75.227.222/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 12 17:13:01.526599 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 12 17:13:01.531895 ignition[779]: GET result: OK
Sep 12 17:13:01.532045 ignition[779]: parsing config with SHA512: 2532400d3953d86e20deb6c808663ede0d570d933743f77f461f9d879c41865b715d88394f779cdb29ae1b4bb1704309f3c359801a1eed80b7df00ef3a0465ff
Sep 12 17:13:01.537733 unknown[779]: fetched base config from "system"
Sep 12 17:13:01.537747 unknown[779]: fetched base config from "system"
Sep 12 17:13:01.537755 unknown[779]: fetched user config from "hetzner"
Sep 12 17:13:01.539441 ignition[779]: fetch: fetch complete
Sep 12 17:13:01.539455 ignition[779]: fetch: fetch passed
Sep 12 17:13:01.539579 ignition[779]: Ignition finished successfully
Sep 12 17:13:01.543983 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:13:01.550030 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:13:01.565042 ignition[786]: Ignition 2.19.0
Sep 12 17:13:01.565057 ignition[786]: Stage: kargs
Sep 12 17:13:01.565279 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:13:01.565289 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:13:01.567685 ignition[786]: kargs: kargs passed
Sep 12 17:13:01.567760 ignition[786]: Ignition finished successfully
Sep 12 17:13:01.569064 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:13:01.579201 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:13:01.593559 ignition[792]: Ignition 2.19.0
Sep 12 17:13:01.593573 ignition[792]: Stage: disks
Sep 12 17:13:01.593809 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:13:01.593820 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:13:01.596611 ignition[792]: disks: disks passed
Sep 12 17:13:01.596667 ignition[792]: Ignition finished successfully
Sep 12 17:13:01.599636 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:13:01.601084 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:13:01.602248 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:13:01.603531 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:13:01.604679 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:13:01.606381 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:13:01.614075 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:13:01.635533 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 12 17:13:01.641538 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:13:01.647938 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:13:01.700896 kernel: EXT4-fs (sda9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none.
Sep 12 17:13:01.701374 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:13:01.703053 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:13:01.714029 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:13:01.717258 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:13:01.726264 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:13:01.727868 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:13:01.730697 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:13:01.733406 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:13:01.736018 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (809)
Sep 12 17:13:01.738067 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:13:01.738108 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:13:01.738120 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:13:01.744447 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:13:01.749852 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:13:01.749904 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:13:01.754099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:13:01.791064 coreos-metadata[811]: Sep 12 17:13:01.790 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 12 17:13:01.796840 coreos-metadata[811]: Sep 12 17:13:01.794 INFO Fetch successful
Sep 12 17:13:01.796840 coreos-metadata[811]: Sep 12 17:13:01.795 INFO wrote hostname ci-4081-3-6-0-ae88ce84d6 to /sysroot/etc/hostname
Sep 12 17:13:01.799299 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:13:01.803961 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:13:01.808985 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:13:01.814041 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:13:01.818965 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:13:01.914646 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:13:01.920040 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:13:01.924705 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:13:01.931850 kernel: BTRFS info (device sda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:13:01.960295 ignition[925]: INFO : Ignition 2.19.0
Sep 12 17:13:01.960295 ignition[925]: INFO : Stage: mount
Sep 12 17:13:01.960220 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:13:01.965320 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:13:01.965320 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:13:01.965320 ignition[925]: INFO : mount: mount passed
Sep 12 17:13:01.965320 ignition[925]: INFO : Ignition finished successfully
Sep 12 17:13:01.967910 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:13:01.978015 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:13:02.093867 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:13:02.103131 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:13:02.114883 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (938)
Sep 12 17:13:02.114941 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:13:02.114952 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:13:02.114962 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:13:02.117842 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:13:02.117901 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:13:02.119924 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:13:02.141323 ignition[955]: INFO : Ignition 2.19.0
Sep 12 17:13:02.141323 ignition[955]: INFO : Stage: files
Sep 12 17:13:02.142658 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:13:02.142658 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:13:02.142658 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:13:02.145785 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:13:02.145785 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:13:02.149923 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:13:02.150814 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:13:02.152005 unknown[955]: wrote ssh authorized keys file for user: core
Sep 12 17:13:02.153407 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:13:02.155003 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:13:02.155003 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:13:02.155003 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:13:02.158398 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 17:13:02.216991 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:13:02.542575 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:13:02.542575 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:13:02.545261 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:13:02.545261 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:13:02.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 17:13:02.851772 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:13:03.099600 systemd-networkd[776]: eth0: Gained IPv6LL
Sep 12 17:13:03.227746 systemd-networkd[776]: eth1: Gained IPv6LL
Sep 12 17:13:03.417768 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:13:03.417768 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:13:03.420992 ignition[955]: INFO : files: files passed
Sep 12 17:13:03.420992 ignition[955]: INFO : Ignition finished successfully
Sep 12 17:13:03.423125 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:13:03.429666 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:13:03.435042 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:13:03.439501 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:13:03.439872 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:13:03.451331 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:13:03.451331 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:13:03.454388 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:13:03.458198 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:13:03.459156 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:13:03.465067 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:13:03.505578 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:13:03.505695 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:13:03.507231 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:13:03.509073 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:13:03.510784 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:13:03.514023 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:13:03.532387 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:13:03.542114 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:13:03.557405 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:13:03.558873 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:13:03.560144 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:13:03.560711 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:13:03.560860 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:13:03.561691 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:13:03.563986 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:13:03.564511 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:13:03.565146 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:13:03.565789 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:13:03.566536 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:13:03.568111 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:13:03.569539 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:13:03.570773 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:13:03.571853 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:13:03.572767 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:13:03.572913 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:13:03.574376 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:13:03.575051 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:13:03.576062 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:13:03.577853 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:13:03.578509 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:13:03.578646 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:13:03.580200 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:13:03.580330 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:13:03.581458 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:13:03.581599 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:13:03.582573 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 17:13:03.582672 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:13:03.593588 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:13:03.599249 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:13:03.600661 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:13:03.601383 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:13:03.603745 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:13:03.603884 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:13:03.608179 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:13:03.608865 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:13:03.612684 ignition[1008]: INFO : Ignition 2.19.0
Sep 12 17:13:03.612684 ignition[1008]: INFO : Stage: umount
Sep 12 17:13:03.616908 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:13:03.616908 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:13:03.616908 ignition[1008]: INFO : umount: umount passed
Sep 12 17:13:03.616908 ignition[1008]: INFO : Ignition finished successfully
Sep 12 17:13:03.619334 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:13:03.619458 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:13:03.620324 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:13:03.620373 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:13:03.621103 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:13:03.621162 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:13:03.623823 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:13:03.623960 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:13:03.627616 systemd[1]: Stopped target network.target - Network.
Sep 12 17:13:03.628908 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:13:03.629015 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:13:03.631039 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:13:03.632333 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:13:03.635880 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:13:03.637365 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:13:03.638791 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:13:03.640098 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:13:03.640184 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:13:03.641392 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:13:03.641469 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:13:03.643008 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:13:03.643068 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:13:03.643799 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:13:03.643863 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:13:03.644864 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:13:03.646052 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:13:03.648012 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:13:03.648619 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:13:03.648706 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:13:03.651499 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:13:03.651632 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:13:03.652950 systemd-networkd[776]: eth0: DHCPv6 lease lost
Sep 12 17:13:03.653902 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:13:03.654029 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:13:03.656672 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:13:03.656766 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:13:03.658405 systemd-networkd[776]: eth1: DHCPv6 lease lost
Sep 12 17:13:03.662753 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:13:03.662911 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:13:03.664059 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:13:03.664108 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:13:03.672071 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:13:03.672585 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:13:03.672650 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:13:03.674716 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:13:03.674762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:13:03.675407 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:13:03.675445 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:13:03.676974 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:13:03.690595 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:13:03.690735 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:13:03.696684 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:13:03.696931 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:13:03.698634 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:13:03.698691 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:13:03.699609 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:13:03.699645 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:13:03.700669 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:13:03.700721 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:13:03.702231 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:13:03.702278 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:13:03.703688 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:13:03.703732 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:13:03.719282 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:13:03.720632 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:13:03.720733 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:13:03.721898 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:13:03.721958 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:13:03.722960 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:13:03.723005 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:13:03.724273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:13:03.724315 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:13:03.731385 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:13:03.732128 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:13:03.733131 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:13:03.741115 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:13:03.751940 systemd[1]: Switching root.
Sep 12 17:13:03.786144 systemd-journald[236]: Journal stopped
Sep 12 17:13:04.749390 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:13:04.749463 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:13:04.749477 kernel: SELinux: policy capability open_perms=1
Sep 12 17:13:04.749486 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:13:04.749496 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:13:04.749517 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:13:04.749528 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:13:04.749538 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:13:04.749547 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:13:04.749562 systemd[1]: Successfully loaded SELinux policy in 36.130ms.
Sep 12 17:13:04.749588 kernel: audit: type=1403 audit(1757697183.981:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:13:04.749599 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.840ms.
Sep 12 17:13:04.749610 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:13:04.749621 systemd[1]: Detected virtualization kvm.
Sep 12 17:13:04.749631 systemd[1]: Detected architecture arm64.
Sep 12 17:13:04.749642 systemd[1]: Detected first boot.
Sep 12 17:13:04.749652 systemd[1]: Hostname set to <ci-4081-3-6-0-ae88ce84d6>.
Sep 12 17:13:04.749665 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:13:04.749678 zram_generator::config[1068]: No configuration found.
Sep 12 17:13:04.749693 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:13:04.749703 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:13:04.749713 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 17:13:04.749725 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:13:04.749735 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:13:04.749750 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:13:04.749760 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:13:04.749773 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:13:04.749783 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:13:04.749795 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:13:04.749805 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:13:04.749815 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:13:04.749882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:13:04.749897 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:13:04.749908 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:13:04.749919 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:13:04.749932 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:13:04.749943 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 17:13:04.749954 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:13:04.749966 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:13:04.749977 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:13:04.749988 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:13:04.749999 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:13:04.750010 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:13:04.750020 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:13:04.750031 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:13:04.750041 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:13:04.750052 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:13:04.750063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:13:04.750074 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:13:04.750084 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:13:04.750095 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:13:04.750107 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:13:04.750118 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:13:04.750129 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:13:04.750140 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:13:04.750150 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:13:04.750161 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:13:04.750174 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:13:04.750187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:13:04.750199 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:13:04.750210 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:13:04.750221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:13:04.750231 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:13:04.750242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:13:04.750252 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:13:04.750264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:13:04.750276 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:13:04.750287 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 12 17:13:04.750298 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 12 17:13:04.750309 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:13:04.750319 kernel: ACPI: bus type drm_connector registered
Sep 12 17:13:04.750328 kernel: fuse: init (API version 7.39)
Sep 12 17:13:04.750339 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:13:04.750350 kernel: loop: module loaded
Sep 12 17:13:04.750361 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:13:04.750371 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:13:04.750382 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:13:04.750393 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:13:04.750403 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:13:04.750414 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:13:04.750425 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:13:04.750461 systemd-journald[1160]: Collecting audit messages is disabled.
Sep 12 17:13:04.750488 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:13:04.750500 systemd-journald[1160]: Journal started
Sep 12 17:13:04.750531 systemd-journald[1160]: Runtime Journal (/run/log/journal/cf2a675b94e2439091dc01b9f3a2dd1c) is 8.0M, max 76.6M, 68.6M free.
Sep 12 17:13:04.752022 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:13:04.753032 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:13:04.754069 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:13:04.755046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:13:04.755939 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:13:04.756094 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:13:04.757234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:13:04.757388 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:13:04.758396 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:13:04.758625 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:13:04.759600 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:13:04.759754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:13:04.760707 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:13:04.760889 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:13:04.761680 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:13:04.764044 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:13:04.765386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:13:04.767383 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:13:04.769331 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:13:04.781017 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:13:04.787025 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:13:04.791970 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:13:04.792577 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:13:04.798748 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:13:04.810336 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:13:04.811414 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:13:04.816012 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:13:04.816626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:13:04.824918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:13:04.830095 systemd-journald[1160]: Time spent on flushing to /var/log/journal/cf2a675b94e2439091dc01b9f3a2dd1c is 38.987ms for 1111 entries.
Sep 12 17:13:04.830095 systemd-journald[1160]: System Journal (/var/log/journal/cf2a675b94e2439091dc01b9f3a2dd1c) is 8.0M, max 584.8M, 576.8M free.
Sep 12 17:13:04.881778 systemd-journald[1160]: Received client request to flush runtime journal.
Sep 12 17:13:04.834995 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:13:04.838439 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:13:04.842968 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:13:04.846196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:13:04.853765 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:13:04.856817 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:13:04.859882 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:13:04.888352 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 17:13:04.895034 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:13:04.896175 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:13:04.905378 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Sep 12 17:13:04.905419 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Sep 12 17:13:04.915809 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:13:04.924181 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:13:04.966549 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:13:04.975060 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:13:04.989005 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Sep 12 17:13:04.989021 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Sep 12 17:13:04.997020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:13:05.406462 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:13:05.413013 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:13:05.436369 systemd-udevd[1233]: Using default interface naming scheme 'v255'.
Sep 12 17:13:05.456958 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:13:05.471012 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:13:05.492207 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:13:05.528374 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Sep 12 17:13:05.564664 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:13:05.653511 systemd-networkd[1243]: lo: Link UP
Sep 12 17:13:05.653522 systemd-networkd[1243]: lo: Gained carrier
Sep 12 17:13:05.656251 systemd-networkd[1243]: Enumeration completed
Sep 12 17:13:05.656957 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:13:05.659121 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:13:05.659132 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:13:05.659949 systemd-networkd[1243]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:13:05.659953 systemd-networkd[1243]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:13:05.660435 systemd-networkd[1243]: eth0: Link UP
Sep 12 17:13:05.660447 systemd-networkd[1243]: eth0: Gained carrier
Sep 12 17:13:05.660461 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:13:05.667109 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:13:05.667236 systemd-networkd[1243]: eth1: Link UP
Sep 12 17:13:05.667241 systemd-networkd[1243]: eth1: Gained carrier
Sep 12 17:13:05.667261 systemd-networkd[1243]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:13:05.692919 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:13:05.692298 systemd-networkd[1243]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:13:05.696141 systemd-networkd[1243]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 12 17:13:05.714045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:13:05.719422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:13:05.730413 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:13:05.740945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:13:05.741483 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:13:05.741554 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:13:05.741903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:13:05.742074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:13:05.752164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:13:05.752358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:13:05.755380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:13:05.757018 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1241)
Sep 12 17:13:05.757000 systemd-networkd[1243]: eth0: DHCPv4 address 5.75.227.222/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 12 17:13:05.769059 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:13:05.771182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:13:05.781309 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:13:05.836274 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 12 17:13:05.840991 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Sep 12 17:13:05.841058 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 12 17:13:05.841072 kernel: [drm] features: -context_init
Sep 12 17:13:05.841083 kernel: [drm] number of scanouts: 1
Sep 12 17:13:05.841094 kernel: [drm] number of cap sets: 0
Sep 12 17:13:05.843535 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Sep 12 17:13:05.848939 kernel: Console: switching to colour frame buffer device 160x50
Sep 12 17:13:05.850087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:13:05.854199 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 12 17:13:05.867132 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:13:05.867486 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:13:05.876071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:13:05.941040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:13:05.998458 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:13:06.005012 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:13:06.018855 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:13:06.046360 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:13:06.048789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:13:06.054032 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:13:06.060427 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:13:06.086656 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:13:06.087575 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:13:06.088818 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:13:06.089048 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:13:06.090096 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:13:06.092200 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:13:06.098109 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:13:06.104064 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:13:06.105038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:13:06.108143 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:13:06.112043 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:13:06.116245 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:13:06.120417 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:13:06.138109 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:13:06.149885 kernel: loop0: detected capacity change from 0 to 114432 Sep 12 17:13:06.156500 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:13:06.159715 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:13:06.173866 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:13:06.194890 kernel: loop1: detected capacity change from 0 to 203944 Sep 12 17:13:06.235046 kernel: loop2: detected capacity change from 0 to 114328 Sep 12 17:13:06.284868 kernel: loop3: detected capacity change from 0 to 8 Sep 12 17:13:06.306996 kernel: loop4: detected capacity change from 0 to 114432 Sep 12 17:13:06.323764 kernel: loop5: detected capacity change from 0 to 203944 Sep 12 17:13:06.337991 kernel: loop6: detected capacity change from 0 to 114328 Sep 12 17:13:06.349948 kernel: loop7: detected capacity change from 0 to 8 Sep 12 17:13:06.351324 (sd-merge)[1329]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 12 17:13:06.351755 (sd-merge)[1329]: Merged extensions into '/usr'. Sep 12 17:13:06.366938 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:13:06.367097 systemd[1]: Reloading... Sep 12 17:13:06.424948 zram_generator::config[1357]: No configuration found. Sep 12 17:13:06.533176 ldconfig[1312]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:13:06.570575 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:13:06.637846 systemd[1]: Reloading finished in 270 ms. Sep 12 17:13:06.657432 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:13:06.658992 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:13:06.667056 systemd[1]: Starting ensure-sysext.service... Sep 12 17:13:06.670073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:13:06.693066 systemd[1]: Reloading requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:13:06.693591 systemd[1]: Reloading... 
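The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images onto /usr, which is why systemd immediately reloads its unit set. The merge state can be inspected with the standard systemd-sysext verbs:

  systemd-sysext status    # which hierarchies are overlaid, and by which extensions
  systemd-sysext refresh   # re-merge after adding or removing an extension image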
Sep 12 17:13:06.714079 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:13:06.714358 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:13:06.715451 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:13:06.715870 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 12 17:13:06.715963 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 12 17:13:06.719988 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:13:06.720003 systemd-tmpfiles[1403]: Skipping /boot Sep 12 17:13:06.729717 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:13:06.729734 systemd-tmpfiles[1403]: Skipping /boot Sep 12 17:13:06.777858 zram_generator::config[1435]: No configuration found. Sep 12 17:13:06.884338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:13:06.955197 systemd[1]: Reloading finished in 260 ms. Sep 12 17:13:06.974476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:13:07.000242 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:13:07.005167 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:13:07.011619 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:13:07.018337 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:13:07.023308 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:13:07.030988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:13:07.037100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:13:07.046150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:13:07.055065 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:13:07.056682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:13:07.060032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:13:07.060216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:13:07.063169 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:13:07.067727 systemd-networkd[1243]: eth0: Gained IPv6LL Sep 12 17:13:07.070712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:13:07.071219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:13:07.077769 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:13:07.080625 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:13:07.081304 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:13:07.102539 systemd[1]: Finished ensure-sysext.service. 
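The "Duplicate line for path" messages are expected when two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first entry it parses and ignores the rest. The merged view can be dumped with a standard flag (the grep pattern is just an example):

  systemd-tmpfiles --cat-config | grep -n '/var/log/journal'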
Sep 12 17:13:07.105886 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:13:07.109624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:13:07.111977 augenrules[1510]: No rules Sep 12 17:13:07.115042 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:13:07.120085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:13:07.125659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:13:07.131077 systemd-networkd[1243]: eth1: Gained IPv6LL Sep 12 17:13:07.136464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:13:07.140052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:13:07.154127 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:13:07.157892 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:13:07.160749 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:13:07.164323 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:13:07.166297 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:13:07.166472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:13:07.168199 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:13:07.168365 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:13:07.174320 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:13:07.174710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:13:07.177369 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:13:07.177592 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:13:07.189370 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:13:07.198248 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:13:07.198342 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:13:07.198369 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:13:07.200407 systemd-resolved[1486]: Positive Trust Anchors: Sep 12 17:13:07.200690 systemd-resolved[1486]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:13:07.200779 systemd-resolved[1486]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:13:07.205995 systemd-resolved[1486]: Using system hostname 'ci-4081-3-6-0-ae88ce84d6'. Sep 12 17:13:07.208353 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:13:07.209256 systemd[1]: Reached target network.target - Network. Sep 12 17:13:07.209907 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:13:07.210618 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:13:07.242135 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:13:07.245594 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:13:07.247257 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:13:07.248935 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:13:07.249649 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:13:07.250418 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:13:07.250452 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:13:07.250931 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:13:07.251662 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:13:07.252362 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:13:07.253002 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:13:07.254637 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:13:07.256914 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:13:07.258742 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:13:07.262319 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:13:07.262959 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:13:07.263430 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:13:07.264104 systemd[1]: System is tainted: cgroupsv1 Sep 12 17:13:07.264146 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:13:07.264167 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:13:07.266991 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:13:07.270135 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:13:07.277212 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:13:07.280890 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
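The positive trust anchor logged above is the IANA root-zone KSK-2017 DS record that systemd-resolved ships as its built-in DNSSEC anchor; the negative anchors are the usual private and special-use domains. Resolution through the resolved stub can be checked with resolvectl (standard verbs; example.com is only a placeholder name):

  resolvectl status              # per-link DNS servers and DNSSEC settings
  resolvectl query example.com   # resolve via the 127.0.0.53 stub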
Sep 12 17:13:07.284820 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:13:07.285396 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:13:07.293980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:13:07.308377 coreos-metadata[1545]: Sep 12 17:13:07.307 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 12 17:13:07.310329 coreos-metadata[1545]: Sep 12 17:13:07.309 INFO Fetch successful Sep 12 17:13:07.310329 coreos-metadata[1545]: Sep 12 17:13:07.309 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 12 17:13:07.310070 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:13:07.311244 coreos-metadata[1545]: Sep 12 17:13:07.311 INFO Fetch successful Sep 12 17:13:07.315078 jq[1548]: false Sep 12 17:13:07.319303 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:13:07.328236 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:13:07.329262 dbus-daemon[1546]: [system] SELinux support is enabled Sep 12 17:13:07.342863 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 12 17:13:07.347463 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:13:07.350057 extend-filesystems[1551]: Found loop4 Sep 12 17:13:07.350057 extend-filesystems[1551]: Found loop5 Sep 12 17:13:07.350057 extend-filesystems[1551]: Found loop6 Sep 12 17:13:07.350057 extend-filesystems[1551]: Found loop7 Sep 12 17:13:07.350057 extend-filesystems[1551]: Found sda Sep 12 17:13:07.350057 extend-filesystems[1551]: Found sda1 Sep 12 17:13:07.350057 extend-filesystems[1551]: Found sda2 Sep 12 17:13:07.350057 extend-filesystems[1551]: Found sda3 Sep 12 17:13:07.350057 extend-filesystems[1551]: Found usr Sep 12 17:13:07.350057 extend-filesystems[1551]: Found sda4 Sep 12 17:13:07.350057 extend-filesystems[1551]: Found sda6 Sep 12 17:13:07.350057 extend-filesystems[1551]: Found sda7 Sep 12 17:13:07.350057 extend-filesystems[1551]: Found sda9 Sep 12 17:13:07.350057 extend-filesystems[1551]: Checking size of /dev/sda9 Sep 12 17:13:07.366607 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:13:07.370207 systemd-timesyncd[1529]: Contacted time server 131.188.3.221:123 (0.flatcar.pool.ntp.org). Sep 12 17:13:07.370273 systemd-timesyncd[1529]: Initial clock synchronization to Fri 2025-09-12 17:13:07.729542 UTC. Sep 12 17:13:07.372314 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:13:07.376363 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:13:07.382316 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:13:07.390551 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:13:07.391214 extend-filesystems[1551]: Resized partition /dev/sda9 Sep 12 17:13:07.406512 extend-filesystems[1586]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:13:07.416068 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 12 17:13:07.407649 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
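coreos-metadata fetched the Hetzner instance metadata from the link-local endpoint logged above; the same documents can be retrieved by hand from inside the instance (URLs taken verbatim from the log):

  curl -s http://169.254.169.254/hetzner/v1/metadata
  curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks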
Sep 12 17:13:07.421914 jq[1584]: true Sep 12 17:13:07.422256 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:13:07.422561 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:13:07.423438 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:13:07.423700 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:13:07.428361 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:13:07.451157 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:13:07.451423 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:13:07.494697 update_engine[1582]: I20250912 17:13:07.491317 1582 main.cc:92] Flatcar Update Engine starting Sep 12 17:13:07.518511 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1241) Sep 12 17:13:07.513429 (ntainerd)[1599]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:13:07.518878 update_engine[1582]: I20250912 17:13:07.498880 1582 update_check_scheduler.cc:74] Next update check in 6m50s Sep 12 17:13:07.518927 jq[1598]: true Sep 12 17:13:07.532532 tar[1596]: linux-arm64/helm Sep 12 17:13:07.567648 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:13:07.569612 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:13:07.569656 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:13:07.572080 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:13:07.572110 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:13:07.574244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:13:07.576292 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:13:07.630560 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:13:07.633826 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:13:07.641219 systemd-logind[1578]: New seat seat0. Sep 12 17:13:07.654049 systemd-logind[1578]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:13:07.654108 systemd-logind[1578]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 12 17:13:07.654511 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:13:07.693842 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 12 17:13:07.712542 extend-filesystems[1586]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 12 17:13:07.712542 extend-filesystems[1586]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 12 17:13:07.712542 extend-filesystems[1586]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
Sep 12 17:13:07.720362 extend-filesystems[1551]: Resized filesystem in /dev/sda9 Sep 12 17:13:07.720362 extend-filesystems[1551]: Found sr0 Sep 12 17:13:07.719871 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:13:07.723527 bash[1640]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:13:07.720133 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:13:07.732611 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:13:07.785364 systemd[1]: Starting sshkeys.service... Sep 12 17:13:07.810166 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:13:07.821847 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:13:07.887461 coreos-metadata[1654]: Sep 12 17:13:07.887 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 12 17:13:07.894584 coreos-metadata[1654]: Sep 12 17:13:07.894 INFO Fetch successful Sep 12 17:13:07.900959 unknown[1654]: wrote ssh authorized keys file for user: core Sep 12 17:13:07.937103 locksmithd[1621]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:13:07.952309 update-ssh-keys[1661]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:13:07.953261 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:13:07.968303 systemd[1]: Finished sshkeys.service. Sep 12 17:13:07.992675 containerd[1599]: time="2025-09-12T17:13:07.992567640Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:13:08.092996 containerd[1599]: time="2025-09-12T17:13:08.090443308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:08.093354 containerd[1599]: time="2025-09-12T17:13:08.093298368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:13:08.093354 containerd[1599]: time="2025-09-12T17:13:08.093350907Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:13:08.093402 containerd[1599]: time="2025-09-12T17:13:08.093370678Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:13:08.093623 containerd[1599]: time="2025-09-12T17:13:08.093597513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:13:08.093660 containerd[1599]: time="2025-09-12T17:13:08.093626562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:08.093721 containerd[1599]: time="2025-09-12T17:13:08.093700710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:13:08.093721 containerd[1599]: time="2025-09-12T17:13:08.093718056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
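extend-filesystems first grew the sda9 partition and then resized the root ext4 filesystem online from 1617920 to 9393147 4k blocks. The filesystem step is a plain resize2fs call, which ext4 supports while mounted (device name taken from the log):

  resize2fs /dev/sda9   # grow ext4 to fill the enlarged partition, online
  lsblk /dev/sda9       # confirm the new size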
type=io.containerd.snapshotter.v1 Sep 12 17:13:08.094112 containerd[1599]: time="2025-09-12T17:13:08.094081318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:13:08.094138 containerd[1599]: time="2025-09-12T17:13:08.094111705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:08.094138 containerd[1599]: time="2025-09-12T17:13:08.094128675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:13:08.094187 containerd[1599]: time="2025-09-12T17:13:08.094141381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:08.094256 containerd[1599]: time="2025-09-12T17:13:08.094234924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:08.095227 containerd[1599]: time="2025-09-12T17:13:08.095194719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:08.095929 containerd[1599]: time="2025-09-12T17:13:08.095861013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:13:08.095976 containerd[1599]: time="2025-09-12T17:13:08.095930564Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:13:08.096068 containerd[1599]: time="2025-09-12T17:13:08.096042497Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:13:08.096105 containerd[1599]: time="2025-09-12T17:13:08.096093741Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:13:08.106475 containerd[1599]: time="2025-09-12T17:13:08.105820431Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:13:08.106475 containerd[1599]: time="2025-09-12T17:13:08.105917526Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:13:08.106475 containerd[1599]: time="2025-09-12T17:13:08.105942563Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:13:08.106475 containerd[1599]: time="2025-09-12T17:13:08.105960452Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:13:08.106475 containerd[1599]: time="2025-09-12T17:13:08.105981602Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:13:08.106475 containerd[1599]: time="2025-09-12T17:13:08.106204299Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:13:08.106680 containerd[1599]: time="2025-09-12T17:13:08.106543444Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 12 17:13:08.106680 containerd[1599]: time="2025-09-12T17:13:08.106648355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:13:08.106680 containerd[1599]: time="2025-09-12T17:13:08.106667122Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:13:08.106736 containerd[1599]: time="2025-09-12T17:13:08.106681710Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:13:08.106736 containerd[1599]: time="2025-09-12T17:13:08.106696882Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:13:08.106736 containerd[1599]: time="2025-09-12T17:13:08.106715148Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:13:08.106736 containerd[1599]: time="2025-09-12T17:13:08.106729902Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:13:08.106807 containerd[1599]: time="2025-09-12T17:13:08.106745702Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:13:08.106807 containerd[1599]: time="2025-09-12T17:13:08.106761167Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:13:08.106807 containerd[1599]: time="2025-09-12T17:13:08.106774625Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:13:08.106807 containerd[1599]: time="2025-09-12T17:13:08.106788126Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:13:08.106807 containerd[1599]: time="2025-09-12T17:13:08.106800832Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:13:08.106911 containerd[1599]: time="2025-09-12T17:13:08.106832390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.106911 containerd[1599]: time="2025-09-12T17:13:08.106848231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.106911 containerd[1599]: time="2025-09-12T17:13:08.106861230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.106911 containerd[1599]: time="2025-09-12T17:13:08.106901146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107066 containerd[1599]: time="2025-09-12T17:13:08.106916695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107066 containerd[1599]: time="2025-09-12T17:13:08.106933080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107066 containerd[1599]: time="2025-09-12T17:13:08.106946664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107066 containerd[1599]: time="2025-09-12T17:13:08.106960373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 12 17:13:08.107066 containerd[1599]: time="2025-09-12T17:13:08.106973707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107066 containerd[1599]: time="2025-09-12T17:13:08.106988419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107066 containerd[1599]: time="2025-09-12T17:13:08.107009276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107066 containerd[1599]: time="2025-09-12T17:13:08.107023153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107220 containerd[1599]: time="2025-09-12T17:13:08.107083634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107220 containerd[1599]: time="2025-09-12T17:13:08.107104240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:13:08.107220 containerd[1599]: time="2025-09-12T17:13:08.107125682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107220 containerd[1599]: time="2025-09-12T17:13:08.107138222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107220 containerd[1599]: time="2025-09-12T17:13:08.107149758Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:13:08.107306 containerd[1599]: time="2025-09-12T17:13:08.107263614Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:13:08.107306 containerd[1599]: time="2025-09-12T17:13:08.107284011Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:13:08.107306 containerd[1599]: time="2025-09-12T17:13:08.107295673Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:13:08.107368 containerd[1599]: time="2025-09-12T17:13:08.107308337Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:13:08.107368 containerd[1599]: time="2025-09-12T17:13:08.107318661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:13:08.107368 containerd[1599]: time="2025-09-12T17:13:08.107331326Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:13:08.107368 containerd[1599]: time="2025-09-12T17:13:08.107342110Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:13:08.107368 containerd[1599]: time="2025-09-12T17:13:08.107357784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:13:08.109410 containerd[1599]: time="2025-09-12T17:13:08.108837540Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:13:08.109410 containerd[1599]: time="2025-09-12T17:13:08.109021281Z" level=info msg="Connect containerd service" Sep 12 17:13:08.109410 containerd[1599]: time="2025-09-12T17:13:08.109220237Z" level=info msg="using legacy CRI server" Sep 12 17:13:08.109410 containerd[1599]: time="2025-09-12T17:13:08.109236037Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:13:08.113888 containerd[1599]: time="2025-09-12T17:13:08.112585895Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:13:08.118883 containerd[1599]: time="2025-09-12T17:13:08.116582569Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 
17:13:08.118883 containerd[1599]: time="2025-09-12T17:13:08.118337353Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:13:08.118883 containerd[1599]: time="2025-09-12T17:13:08.118384709Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:13:08.118883 containerd[1599]: time="2025-09-12T17:13:08.118482265Z" level=info msg="Start subscribing containerd event" Sep 12 17:13:08.120894 containerd[1599]: time="2025-09-12T17:13:08.119327911Z" level=info msg="Start recovering state" Sep 12 17:13:08.120894 containerd[1599]: time="2025-09-12T17:13:08.119432823Z" level=info msg="Start event monitor" Sep 12 17:13:08.120894 containerd[1599]: time="2025-09-12T17:13:08.119448789Z" level=info msg="Start snapshots syncer" Sep 12 17:13:08.120894 containerd[1599]: time="2025-09-12T17:13:08.119459448Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:13:08.120894 containerd[1599]: time="2025-09-12T17:13:08.119468936Z" level=info msg="Start streaming server" Sep 12 17:13:08.120583 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:13:08.124697 containerd[1599]: time="2025-09-12T17:13:08.124660015Z" level=info msg="containerd successfully booted in 0.137469s" Sep 12 17:13:08.544663 tar[1596]: linux-arm64/LICENSE Sep 12 17:13:08.545115 tar[1596]: linux-arm64/README.md Sep 12 17:13:08.564837 sshd_keygen[1594]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:13:08.565575 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:13:08.611626 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:13:08.615429 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:13:08.634254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:13:08.639810 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:13:08.640202 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:13:08.646299 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:13:08.646926 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:13:08.662761 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:13:08.670418 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:13:08.677282 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:13:08.679481 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:13:08.680836 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:13:08.681646 systemd[1]: Startup finished in 6.062s (kernel) + 4.735s (userspace) = 10.797s. Sep 12 17:13:09.184784 kubelet[1696]: E0912 17:13:09.184731 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:13:09.189355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:13:09.189812 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:13:19.441285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
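The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is only written during init/join, so the restart loop that follows is expected until provisioning completes. A quick check (paths come from the error message itself):

  ls -l /var/lib/kubelet/config.yaml   # absent until kubeadm writes it
  systemctl status kubelet             # shows the exit-code restarts seen in this log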
Sep 12 17:13:19.449215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:13:19.672229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:13:19.692592 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:13:19.753663 kubelet[1728]: E0912 17:13:19.753600 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:13:19.758031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:13:19.758454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:13:30.009972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:13:30.020259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:13:30.134049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:13:30.147622 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:13:30.198976 kubelet[1747]: E0912 17:13:30.198912 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:13:30.203075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:13:30.203287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:13:37.202350 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:13:37.217412 systemd[1]: Started sshd@0-5.75.227.222:22-139.178.89.65:37932.service - OpenSSH per-connection server daemon (139.178.89.65:37932). Sep 12 17:13:38.216288 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 37932 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:13:38.219597 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:13:38.229559 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:13:38.241227 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:13:38.246334 systemd-logind[1578]: New session 1 of user core. Sep 12 17:13:38.257073 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:13:38.265128 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:13:38.269100 (systemd)[1762]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:13:38.377983 systemd[1762]: Queued start job for default target default.target. Sep 12 17:13:38.378353 systemd[1762]: Created slice app.slice - User Application Slice. Sep 12 17:13:38.378371 systemd[1762]: Reached target paths.target - Paths. Sep 12 17:13:38.378381 systemd[1762]: Reached target timers.target - Timers. Sep 12 17:13:38.382925 systemd[1762]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Sep 12 17:13:38.390824 systemd[1762]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:13:38.391023 systemd[1762]: Reached target sockets.target - Sockets. Sep 12 17:13:38.391104 systemd[1762]: Reached target basic.target - Basic System. Sep 12 17:13:38.391205 systemd[1762]: Reached target default.target - Main User Target. Sep 12 17:13:38.391232 systemd[1762]: Startup finished in 116ms. Sep 12 17:13:38.391449 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:13:38.400478 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:13:39.099588 systemd[1]: Started sshd@1-5.75.227.222:22-139.178.89.65:37938.service - OpenSSH per-connection server daemon (139.178.89.65:37938). Sep 12 17:13:40.081590 sshd[1774]: Accepted publickey for core from 139.178.89.65 port 37938 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:13:40.083748 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:13:40.088668 systemd-logind[1578]: New session 2 of user core. Sep 12 17:13:40.099379 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:13:40.454190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:13:40.466482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:13:40.581116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:13:40.592615 (kubelet)[1790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:13:40.637360 kubelet[1790]: E0912 17:13:40.637276 1790 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:13:40.641546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:13:40.641731 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:13:40.766203 sshd[1774]: pam_unix(sshd:session): session closed for user core Sep 12 17:13:40.771118 systemd[1]: sshd@1-5.75.227.222:22-139.178.89.65:37938.service: Deactivated successfully. Sep 12 17:13:40.775192 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:13:40.776463 systemd-logind[1578]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:13:40.777390 systemd-logind[1578]: Removed session 2. Sep 12 17:13:40.939338 systemd[1]: Started sshd@2-5.75.227.222:22-139.178.89.65:52844.service - OpenSSH per-connection server daemon (139.178.89.65:52844). Sep 12 17:13:41.924393 sshd[1802]: Accepted publickey for core from 139.178.89.65 port 52844 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:13:41.926306 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:13:41.931261 systemd-logind[1578]: New session 3 of user core. Sep 12 17:13:41.938343 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:13:42.607288 sshd[1802]: pam_unix(sshd:session): session closed for user core Sep 12 17:13:42.612325 systemd[1]: sshd@2-5.75.227.222:22-139.178.89.65:52844.service: Deactivated successfully. Sep 12 17:13:42.617660 systemd[1]: session-3.scope: Deactivated successfully. 
Sep 12 17:13:42.619395 systemd-logind[1578]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:13:42.620775 systemd-logind[1578]: Removed session 3. Sep 12 17:13:42.775696 systemd[1]: Started sshd@3-5.75.227.222:22-139.178.89.65:52860.service - OpenSSH per-connection server daemon (139.178.89.65:52860). Sep 12 17:13:43.758916 sshd[1810]: Accepted publickey for core from 139.178.89.65 port 52860 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:13:43.761270 sshd[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:13:43.767487 systemd-logind[1578]: New session 4 of user core. Sep 12 17:13:43.778461 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:13:44.445210 sshd[1810]: pam_unix(sshd:session): session closed for user core Sep 12 17:13:44.450282 systemd[1]: sshd@3-5.75.227.222:22-139.178.89.65:52860.service: Deactivated successfully. Sep 12 17:13:44.453607 systemd-logind[1578]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:13:44.454711 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:13:44.455419 systemd-logind[1578]: Removed session 4. Sep 12 17:13:44.612583 systemd[1]: Started sshd@4-5.75.227.222:22-139.178.89.65:52862.service - OpenSSH per-connection server daemon (139.178.89.65:52862). Sep 12 17:13:45.593597 sshd[1818]: Accepted publickey for core from 139.178.89.65 port 52862 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:13:45.595727 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:13:45.601144 systemd-logind[1578]: New session 5 of user core. Sep 12 17:13:45.608274 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:13:46.128002 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:13:46.128360 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:13:46.145111 sudo[1822]: pam_unix(sudo:session): session closed for user root Sep 12 17:13:46.306441 sshd[1818]: pam_unix(sshd:session): session closed for user core Sep 12 17:13:46.312512 systemd[1]: sshd@4-5.75.227.222:22-139.178.89.65:52862.service: Deactivated successfully. Sep 12 17:13:46.316072 systemd-logind[1578]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:13:46.316433 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:13:46.317676 systemd-logind[1578]: Removed session 5. Sep 12 17:13:46.479860 systemd[1]: Started sshd@5-5.75.227.222:22-139.178.89.65:52878.service - OpenSSH per-connection server daemon (139.178.89.65:52878). Sep 12 17:13:47.478123 sshd[1827]: Accepted publickey for core from 139.178.89.65 port 52878 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:13:47.480328 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:13:47.488550 systemd-logind[1578]: New session 6 of user core. Sep 12 17:13:47.495454 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 17:13:48.011515 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:13:48.012145 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:13:48.015963 sudo[1832]: pam_unix(sudo:session): session closed for user root Sep 12 17:13:48.022680 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:13:48.023339 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:13:48.050348 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:13:48.054178 auditctl[1835]: No rules Sep 12 17:13:48.054796 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:13:48.055168 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:13:48.064451 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:13:48.091459 augenrules[1854]: No rules Sep 12 17:13:48.093777 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:13:48.095910 sudo[1831]: pam_unix(sudo:session): session closed for user root Sep 12 17:13:48.259260 sshd[1827]: pam_unix(sshd:session): session closed for user core Sep 12 17:13:48.263992 systemd-logind[1578]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:13:48.265428 systemd[1]: sshd@5-5.75.227.222:22-139.178.89.65:52878.service: Deactivated successfully. Sep 12 17:13:48.270503 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:13:48.272461 systemd-logind[1578]: Removed session 6. Sep 12 17:13:48.440756 systemd[1]: Started sshd@6-5.75.227.222:22-139.178.89.65:52894.service - OpenSSH per-connection server daemon (139.178.89.65:52894). Sep 12 17:13:49.439419 sshd[1863]: Accepted publickey for core from 139.178.89.65 port 52894 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:13:49.441417 sshd[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:13:49.447823 systemd-logind[1578]: New session 7 of user core. Sep 12 17:13:49.455437 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:13:49.968566 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:13:49.968954 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:13:50.295221 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:13:50.298514 (dockerd)[1882]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:13:50.555147 dockerd[1882]: time="2025-09-12T17:13:50.554931622Z" level=info msg="Starting up" Sep 12 17:13:50.639143 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport866855590-merged.mount: Deactivated successfully. Sep 12 17:13:50.656084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 17:13:50.661901 dockerd[1882]: time="2025-09-12T17:13:50.661749236Z" level=info msg="Loading containers: start." Sep 12 17:13:50.664505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:13:50.798861 kernel: Initializing XFRM netlink socket Sep 12 17:13:50.834103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
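The sudo sequence above removes the shipped rule files under /etc/audit/rules.d/ and restarts audit-rules.service, after which auditctl reports "No rules". augenrules is the standard auditd helper that assembles whatever fragments remain:

  augenrules --load   # concatenate /etc/audit/rules.d/*.rules and load the result
  auditctl -l         # list loaded rules; prints "No rules" here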
Sep 12 17:13:50.863475 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:13:50.912019 systemd-networkd[1243]: docker0: Link UP Sep 12 17:13:50.922440 kubelet[1960]: E0912 17:13:50.922320 1960 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:13:50.927023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:13:50.927401 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:13:50.932637 dockerd[1882]: time="2025-09-12T17:13:50.932051420Z" level=info msg="Loading containers: done." Sep 12 17:13:50.950452 dockerd[1882]: time="2025-09-12T17:13:50.950345509Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:13:50.950679 dockerd[1882]: time="2025-09-12T17:13:50.950510879Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:13:50.950679 dockerd[1882]: time="2025-09-12T17:13:50.950654157Z" level=info msg="Daemon has completed initialization" Sep 12 17:13:50.990877 dockerd[1882]: time="2025-09-12T17:13:50.989844987Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:13:50.990096 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:13:52.033424 containerd[1599]: time="2025-09-12T17:13:52.033169813Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:13:52.707295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250171936.mount: Deactivated successfully. Sep 12 17:13:52.886420 update_engine[1582]: I20250912 17:13:52.885897 1582 update_attempter.cc:509] Updating boot flags... 
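The PullImage line above and the ImageCreate events that follow are containerd fetching the Kubernetes control-plane images into its k8s.io namespace. The same pull can be reproduced with the stock containerd CLI (tag taken from the log):

  ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.13
  ctr -n k8s.io images ls -q | grep kube-apiserver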
Sep 12 17:13:52.943159 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2064) Sep 12 17:13:53.465323 containerd[1599]: time="2025-09-12T17:13:53.465277410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:53.468089 containerd[1599]: time="2025-09-12T17:13:53.468018887Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687423" Sep 12 17:13:53.469587 containerd[1599]: time="2025-09-12T17:13:53.469121360Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:53.475372 containerd[1599]: time="2025-09-12T17:13:53.475304279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:53.477800 containerd[1599]: time="2025-09-12T17:13:53.477717563Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.444496166s" Sep 12 17:13:53.477953 containerd[1599]: time="2025-09-12T17:13:53.477800682Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 12 17:13:53.480261 containerd[1599]: time="2025-09-12T17:13:53.479991222Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:13:54.585875 containerd[1599]: time="2025-09-12T17:13:54.585784843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:54.588266 containerd[1599]: time="2025-09-12T17:13:54.588219722Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459787" Sep 12 17:13:54.590951 containerd[1599]: time="2025-09-12T17:13:54.589873295Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:54.593708 containerd[1599]: time="2025-09-12T17:13:54.593663054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:54.594788 containerd[1599]: time="2025-09-12T17:13:54.594736970Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.114701328s" Sep 12 17:13:54.594788 containerd[1599]: time="2025-09-12T17:13:54.594785111Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 12 17:13:54.595488 containerd[1599]: time="2025-09-12T17:13:54.595437520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:13:55.536187 containerd[1599]: time="2025-09-12T17:13:55.536124704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:55.537672 containerd[1599]: time="2025-09-12T17:13:55.537632580Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127526" Sep 12 17:13:55.538817 containerd[1599]: time="2025-09-12T17:13:55.538400424Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:55.542684 containerd[1599]: time="2025-09-12T17:13:55.542638652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:55.544018 containerd[1599]: time="2025-09-12T17:13:55.543978857Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 948.500279ms" Sep 12 17:13:55.544152 containerd[1599]: time="2025-09-12T17:13:55.544130681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 12 17:13:55.544946 containerd[1599]: time="2025-09-12T17:13:55.544919014Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:13:56.554766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207068657.mount: Deactivated successfully. 
Sep 12 17:13:56.851914 containerd[1599]: time="2025-09-12T17:13:56.851735051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:56.853393 containerd[1599]: time="2025-09-12T17:13:56.853338815Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954933" Sep 12 17:13:56.854344 containerd[1599]: time="2025-09-12T17:13:56.854288717Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:56.858554 containerd[1599]: time="2025-09-12T17:13:56.857206451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:56.858554 containerd[1599]: time="2025-09-12T17:13:56.858198369Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.313137296s" Sep 12 17:13:56.858554 containerd[1599]: time="2025-09-12T17:13:56.858253912Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 12 17:13:56.858914 containerd[1599]: time="2025-09-12T17:13:56.858887206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:13:57.432109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295780570.mount: Deactivated successfully. 
Sep 12 17:13:58.172871 containerd[1599]: time="2025-09-12T17:13:58.171478780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:58.173901 containerd[1599]: time="2025-09-12T17:13:58.173866975Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Sep 12 17:13:58.175520 containerd[1599]: time="2025-09-12T17:13:58.175493090Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:58.185024 containerd[1599]: time="2025-09-12T17:13:58.184953593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:58.186106 containerd[1599]: time="2025-09-12T17:13:58.186063199Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.327139258s" Sep 12 17:13:58.186229 containerd[1599]: time="2025-09-12T17:13:58.186210693Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 17:13:58.187282 containerd[1599]: time="2025-09-12T17:13:58.187227665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:13:58.664601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664040199.mount: Deactivated successfully. 
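
Each Pulled event above pairs a mutable repo tag with an immutable repo digest (sha256:…); containerd verifies fetched content against these digests rather than trusting the tag. A stdlib-only sketch of that content-addressing check on a stand-in blob (the digest here is computed locally, purely to show the mechanism):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verifyDigest reports whether blob hashes to the expected
// "sha256:<hex>" digest, the same content-addressing containerd
// applies to pulled layers and manifests.
func verifyDigest(blob []byte, expected string) bool {
	sum := sha256.Sum256(blob)
	return "sha256:"+hex.EncodeToString(sum[:]) == expected
}

func main() {
	blob := []byte("example layer content") // stand-in for a pulled blob
	digest := fmt.Sprintf("sha256:%x", sha256.Sum256(blob))
	fmt.Println(verifyDigest(blob, digest)) // true
}
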
Sep 12 17:13:58.672248 containerd[1599]: time="2025-09-12T17:13:58.672191349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:58.673379 containerd[1599]: time="2025-09-12T17:13:58.673345291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 12 17:13:58.674304 containerd[1599]: time="2025-09-12T17:13:58.674276752Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:58.679956 containerd[1599]: time="2025-09-12T17:13:58.679901972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:58.680846 containerd[1599]: time="2025-09-12T17:13:58.680783454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 493.36848ms" Sep 12 17:13:58.680982 containerd[1599]: time="2025-09-12T17:13:58.680824709Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:13:58.682172 containerd[1599]: time="2025-09-12T17:13:58.682107739Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:13:59.261064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1315448506.mount: Deactivated successfully. Sep 12 17:14:00.667258 containerd[1599]: time="2025-09-12T17:14:00.667186106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:00.668509 containerd[1599]: time="2025-09-12T17:14:00.668475177Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537235" Sep 12 17:14:00.669688 containerd[1599]: time="2025-09-12T17:14:00.669390803Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:00.676080 containerd[1599]: time="2025-09-12T17:14:00.676037906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:00.678435 containerd[1599]: time="2025-09-12T17:14:00.678382249Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.996213688s" Sep 12 17:14:00.678615 containerd[1599]: time="2025-09-12T17:14:00.678573713Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 12 17:14:01.136284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
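
kubelet.service is now on its fifth scheduled restart: systemd's Restart= handling re-launches the failed unit after RestartSec for as long as config.yaml stays missing. A toy Go approximation of that supervision loop; the child command and delay are stand-ins for ExecStart= and RestartSec=, not systemd's implementation:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	const restartSec = 10 * time.Second // stand-in for RestartSec=
	for counter := 1; counter <= 5; counter++ {
		// Stand-in for ExecStart=; "false" exits 1, just like kubelet
		// does while /var/lib/kubelet/config.yaml is absent.
		err := exec.Command("false").Run()
		if err == nil {
			return // unit succeeded; nothing left to supervise
		}
		log.Printf("main process exited: %v; scheduled restart, counter is at %d", err, counter)
		time.Sleep(restartSec)
	}
}
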
Sep 12 17:14:01.143181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:01.289123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:01.305467 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:14:01.356458 kubelet[2258]: E0912 17:14:01.356387 2258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:14:01.360978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:14:01.361326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:14:05.050977 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:05.066944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:05.110771 systemd[1]: Reloading requested from client PID 2288 ('systemctl') (unit session-7.scope)... Sep 12 17:14:05.110788 systemd[1]: Reloading... Sep 12 17:14:05.237866 zram_generator::config[2338]: No configuration found. Sep 12 17:14:05.336593 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:14:05.415125 systemd[1]: Reloading finished in 303 ms. Sep 12 17:14:05.463521 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:14:05.463593 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:14:05.464043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:05.474370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:05.609164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:05.620329 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:14:05.667647 kubelet[2385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:14:05.667647 kubelet[2385]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:14:05.667647 kubelet[2385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
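
The restarted kubelet (PID 2385) warns above that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated in favour of settings in the config file. The usual compatibility pattern behind such warnings, sketched with the stdlib flag package (the flag name is reused from the log; the handling code is ours, not kubelet's):

package main

import (
	"flag"
	"fmt"
	"log"
)

func main() {
	// Legacy flag kept for compatibility; its config-file counterpart
	// is the supported way to set it.
	endpoint := flag.String("container-runtime-endpoint", "", "deprecated: set via the config file")
	flag.Parse()

	if *endpoint != "" {
		log.Printf("Flag --container-runtime-endpoint has been deprecated, " +
			"This parameter should be set via the config file specified by the --config flag.")
	}
	fmt.Println("effective endpoint:", *endpoint)
}
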
Sep 12 17:14:05.668474 kubelet[2385]: I0912 17:14:05.667725 2385 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:14:06.594558 kubelet[2385]: I0912 17:14:06.594487 2385 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:14:06.594558 kubelet[2385]: I0912 17:14:06.594534 2385 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:14:06.594959 kubelet[2385]: I0912 17:14:06.594923 2385 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:14:06.620920 kubelet[2385]: E0912 17:14:06.620688 2385 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://5.75.227.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 5.75.227.222:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:14:06.620920 kubelet[2385]: I0912 17:14:06.620710 2385 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:14:06.636916 kubelet[2385]: E0912 17:14:06.636869 2385 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:14:06.636916 kubelet[2385]: I0912 17:14:06.636910 2385 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:14:06.641021 kubelet[2385]: I0912 17:14:06.640970 2385 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:14:06.643432 kubelet[2385]: I0912 17:14:06.643374 2385 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:14:06.643878 kubelet[2385]: I0912 17:14:06.643756 2385 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:14:06.644291 kubelet[2385]: I0912 17:14:06.643857 2385 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-0-ae88ce84d6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:14:06.644491 kubelet[2385]: I0912 17:14:06.644423 2385 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:14:06.644491 kubelet[2385]: I0912 17:14:06.644452 2385 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:14:06.644856 kubelet[2385]: I0912 17:14:06.644797 2385 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:14:06.649862 kubelet[2385]: I0912 17:14:06.649614 2385 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:14:06.649862 kubelet[2385]: I0912 17:14:06.649659 2385 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:14:06.649862 kubelet[2385]: I0912 17:14:06.649687 2385 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:14:06.649862 kubelet[2385]: I0912 17:14:06.649768 2385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:14:06.652326 kubelet[2385]: W0912 17:14:06.652240 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://5.75.227.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-0-ae88ce84d6&limit=500&resourceVersion=0": dial tcp 5.75.227.222:6443: connect: connection refused Sep 12 17:14:06.652507 kubelet[2385]: E0912 17:14:06.652481 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://5.75.227.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-0-ae88ce84d6&limit=500&resourceVersion=0\": dial tcp 5.75.227.222:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:14:06.654876 kubelet[2385]: W0912 17:14:06.653805 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://5.75.227.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 5.75.227.222:6443: connect: connection refused Sep 12 17:14:06.654876 kubelet[2385]: E0912 17:14:06.653879 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://5.75.227.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 5.75.227.222:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:14:06.656867 kubelet[2385]: I0912 17:14:06.655428 2385 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:14:06.656867 kubelet[2385]: I0912 17:14:06.656157 2385 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:14:06.656867 kubelet[2385]: W0912 17:14:06.656354 2385 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:14:06.657520 kubelet[2385]: I0912 17:14:06.657496 2385 server.go:1274] "Started kubelet" Sep 12 17:14:06.661421 kubelet[2385]: I0912 17:14:06.661365 2385 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:14:06.662374 kubelet[2385]: I0912 17:14:06.662310 2385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:14:06.662780 kubelet[2385]: I0912 17:14:06.662741 2385 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:14:06.662952 kubelet[2385]: I0912 17:14:06.662937 2385 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:14:06.664456 kubelet[2385]: E0912 17:14:06.663166 2385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://5.75.227.222:6443/api/v1/namespaces/default/events\": dial tcp 5.75.227.222:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-0-ae88ce84d6.1864985549b365cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-0-ae88ce84d6,UID:ci-4081-3-6-0-ae88ce84d6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-0-ae88ce84d6,},FirstTimestamp:2025-09-12 17:14:06.657471951 +0000 UTC m=+1.033120783,LastTimestamp:2025-09-12 17:14:06.657471951 +0000 UTC m=+1.033120783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-0-ae88ce84d6,}" Sep 12 17:14:06.667293 kubelet[2385]: I0912 17:14:06.667241 2385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:14:06.667755 kubelet[2385]: I0912 17:14:06.667712 2385 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:14:06.672160 kubelet[2385]: E0912 17:14:06.672136 2385 
kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:14:06.672560 kubelet[2385]: E0912 17:14:06.672545 2385 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-0-ae88ce84d6\" not found" Sep 12 17:14:06.672681 kubelet[2385]: I0912 17:14:06.672661 2385 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:14:06.673016 kubelet[2385]: I0912 17:14:06.672999 2385 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:14:06.673871 kubelet[2385]: I0912 17:14:06.673141 2385 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:14:06.673871 kubelet[2385]: W0912 17:14:06.673634 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://5.75.227.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 5.75.227.222:6443: connect: connection refused Sep 12 17:14:06.673871 kubelet[2385]: E0912 17:14:06.673678 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://5.75.227.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 5.75.227.222:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:14:06.674306 kubelet[2385]: E0912 17:14:06.674241 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.227.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-0-ae88ce84d6?timeout=10s\": dial tcp 5.75.227.222:6443: connect: connection refused" interval="200ms" Sep 12 17:14:06.674588 kubelet[2385]: I0912 17:14:06.674572 2385 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:14:06.674731 kubelet[2385]: I0912 17:14:06.674716 2385 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:14:06.676641 kubelet[2385]: I0912 17:14:06.676625 2385 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:14:06.697938 kubelet[2385]: I0912 17:14:06.697867 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:14:06.699331 kubelet[2385]: I0912 17:14:06.699292 2385 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:14:06.699331 kubelet[2385]: I0912 17:14:06.699328 2385 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:14:06.699454 kubelet[2385]: I0912 17:14:06.699352 2385 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:14:06.699454 kubelet[2385]: E0912 17:14:06.699407 2385 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:14:06.706042 kubelet[2385]: W0912 17:14:06.705963 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://5.75.227.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 5.75.227.222:6443: connect: connection refused Sep 12 17:14:06.706482 kubelet[2385]: E0912 17:14:06.706012 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://5.75.227.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 5.75.227.222:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:14:06.707263 kubelet[2385]: I0912 17:14:06.707243 2385 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:14:06.707384 kubelet[2385]: I0912 17:14:06.707371 2385 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:14:06.707447 kubelet[2385]: I0912 17:14:06.707439 2385 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:14:06.711875 kubelet[2385]: I0912 17:14:06.711813 2385 policy_none.go:49] "None policy: Start" Sep 12 17:14:06.713226 kubelet[2385]: I0912 17:14:06.713190 2385 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:14:06.713343 kubelet[2385]: I0912 17:14:06.713237 2385 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:14:06.721426 kubelet[2385]: I0912 17:14:06.721360 2385 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:14:06.721752 kubelet[2385]: I0912 17:14:06.721702 2385 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:14:06.721886 kubelet[2385]: I0912 17:14:06.721737 2385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:14:06.727652 kubelet[2385]: I0912 17:14:06.727587 2385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:14:06.731723 kubelet[2385]: E0912 17:14:06.731685 2385 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-0-ae88ce84d6\" not found" Sep 12 17:14:06.825873 kubelet[2385]: I0912 17:14:06.825072 2385 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.825873 kubelet[2385]: E0912 17:14:06.825792 2385 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.227.222:6443/api/v1/nodes\": dial tcp 5.75.227.222:6443: connect: connection refused" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.873956 kubelet[2385]: I0912 17:14:06.873755 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f33701b81866c9a553ad70af38246e1c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" (UID: \"f33701b81866c9a553ad70af38246e1c\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.873956 kubelet[2385]: I0912 17:14:06.873864 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f33701b81866c9a553ad70af38246e1c-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" (UID: \"f33701b81866c9a553ad70af38246e1c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.873956 kubelet[2385]: I0912 17:14:06.873911 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f33701b81866c9a553ad70af38246e1c-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" (UID: \"f33701b81866c9a553ad70af38246e1c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.874859 kubelet[2385]: I0912 17:14:06.873966 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f33701b81866c9a553ad70af38246e1c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" (UID: \"f33701b81866c9a553ad70af38246e1c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.874859 kubelet[2385]: I0912 17:14:06.874009 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9cbdac9f2e0ca77e2a208f4c52eb889-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-0-ae88ce84d6\" (UID: \"b9cbdac9f2e0ca77e2a208f4c52eb889\") " pod="kube-system/kube-scheduler-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.874859 kubelet[2385]: I0912 17:14:06.874066 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c25d8ab718eeb168bb9c7302b6f298cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-0-ae88ce84d6\" (UID: \"c25d8ab718eeb168bb9c7302b6f298cf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.874859 kubelet[2385]: I0912 17:14:06.874100 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f33701b81866c9a553ad70af38246e1c-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" (UID: \"f33701b81866c9a553ad70af38246e1c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.874859 kubelet[2385]: I0912 17:14:06.874134 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c25d8ab718eeb168bb9c7302b6f298cf-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-0-ae88ce84d6\" (UID: \"c25d8ab718eeb168bb9c7302b6f298cf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.875113 kubelet[2385]: I0912 17:14:06.874168 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c25d8ab718eeb168bb9c7302b6f298cf-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-0-ae88ce84d6\" (UID: \"c25d8ab718eeb168bb9c7302b6f298cf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:06.875688 kubelet[2385]: E0912 17:14:06.875618 
2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.227.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-0-ae88ce84d6?timeout=10s\": dial tcp 5.75.227.222:6443: connect: connection refused" interval="400ms" Sep 12 17:14:07.029733 kubelet[2385]: I0912 17:14:07.029211 2385 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:07.029733 kubelet[2385]: E0912 17:14:07.029688 2385 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.227.222:6443/api/v1/nodes\": dial tcp 5.75.227.222:6443: connect: connection refused" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:07.109875 containerd[1599]: time="2025-09-12T17:14:07.109789679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-0-ae88ce84d6,Uid:c25d8ab718eeb168bb9c7302b6f298cf,Namespace:kube-system,Attempt:0,}" Sep 12 17:14:07.116027 containerd[1599]: time="2025-09-12T17:14:07.115671715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-0-ae88ce84d6,Uid:b9cbdac9f2e0ca77e2a208f4c52eb889,Namespace:kube-system,Attempt:0,}" Sep 12 17:14:07.116487 containerd[1599]: time="2025-09-12T17:14:07.116458912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-0-ae88ce84d6,Uid:f33701b81866c9a553ad70af38246e1c,Namespace:kube-system,Attempt:0,}" Sep 12 17:14:07.277303 kubelet[2385]: E0912 17:14:07.277196 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.227.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-0-ae88ce84d6?timeout=10s\": dial tcp 5.75.227.222:6443: connect: connection refused" interval="800ms" Sep 12 17:14:07.433399 kubelet[2385]: I0912 17:14:07.433350 2385 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:07.433808 kubelet[2385]: E0912 17:14:07.433745 2385 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.227.222:6443/api/v1/nodes\": dial tcp 5.75.227.222:6443: connect: connection refused" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:07.479106 kubelet[2385]: W0912 17:14:07.478975 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://5.75.227.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 5.75.227.222:6443: connect: connection refused Sep 12 17:14:07.479106 kubelet[2385]: E0912 17:14:07.479101 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://5.75.227.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 5.75.227.222:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:14:07.668665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210255337.mount: Deactivated successfully. 
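
Every control-plane call in this stretch fails with connection refused: the kube-apiserver static pod is only now getting a sandbox, so nothing listens on 5.75.227.222:6443 yet. Note how the lease controller doubles its retry interval each time (200ms → 400ms → 800ms, then 1.6s). A stdlib sketch of that probe-with-doubling-backoff shape, reusing the endpoint from the log (illustrative only, not client-go's rate limiter):

package main

import (
	"log"
	"net"
	"time"
)

func main() {
	const endpoint = "5.75.227.222:6443" // API server address from the log
	interval := 200 * time.Millisecond   // matches the first logged retry interval

	for attempt := 1; attempt <= 4; attempt++ {
		conn, err := net.DialTimeout("tcp", endpoint, time.Second)
		if err == nil {
			conn.Close()
			log.Println("API server reachable")
			return
		}
		log.Printf("attempt %d: %v; will retry, interval=%s", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as in the controller.go entries
	}
	log.Println("giving up here; kubelet instead keeps retrying until the apiserver comes up")
}

The loop resolves itself once the static pods started below are running: registration succeeds at 17:14:10.
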
Sep 12 17:14:07.677915 containerd[1599]: time="2025-09-12T17:14:07.676817753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:14:07.693343 containerd[1599]: time="2025-09-12T17:14:07.693242593Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 12 17:14:07.694530 containerd[1599]: time="2025-09-12T17:14:07.694266650Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:14:07.701851 containerd[1599]: time="2025-09-12T17:14:07.700563349Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:14:07.705449 containerd[1599]: time="2025-09-12T17:14:07.705367874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:14:07.706664 containerd[1599]: time="2025-09-12T17:14:07.706478553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:14:07.708482 containerd[1599]: time="2025-09-12T17:14:07.708197904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:14:07.712514 containerd[1599]: time="2025-09-12T17:14:07.712465335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:14:07.715302 containerd[1599]: time="2025-09-12T17:14:07.715237150Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 599.464249ms" Sep 12 17:14:07.716041 containerd[1599]: time="2025-09-12T17:14:07.715981457Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 599.341019ms" Sep 12 17:14:07.717493 containerd[1599]: time="2025-09-12T17:14:07.717441703Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 607.490503ms" Sep 12 17:14:07.902730 kubelet[2385]: W0912 17:14:07.898718 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://5.75.227.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 5.75.227.222:6443: connect: connection refused Sep 12 17:14:07.902730 
kubelet[2385]: E0912 17:14:07.898860 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://5.75.227.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 5.75.227.222:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:14:07.938500 containerd[1599]: time="2025-09-12T17:14:07.937318097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:07.938500 containerd[1599]: time="2025-09-12T17:14:07.937379472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:07.938500 containerd[1599]: time="2025-09-12T17:14:07.937394716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:07.938500 containerd[1599]: time="2025-09-12T17:14:07.938148105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:07.942846 containerd[1599]: time="2025-09-12T17:14:07.942511999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:07.942846 containerd[1599]: time="2025-09-12T17:14:07.942635270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:07.942846 containerd[1599]: time="2025-09-12T17:14:07.942665078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:07.942846 containerd[1599]: time="2025-09-12T17:14:07.942781907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:07.948945 containerd[1599]: time="2025-09-12T17:14:07.948690069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:07.948945 containerd[1599]: time="2025-09-12T17:14:07.948749284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:07.948945 containerd[1599]: time="2025-09-12T17:14:07.948760647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:07.949884 containerd[1599]: time="2025-09-12T17:14:07.949195436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:08.010721 containerd[1599]: time="2025-09-12T17:14:08.010677208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-0-ae88ce84d6,Uid:c25d8ab718eeb168bb9c7302b6f298cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b2bcd7c64736071e49bf3c135f2aeb17552ae65585240ad7062bdaa8f00d150\"" Sep 12 17:14:08.017858 containerd[1599]: time="2025-09-12T17:14:08.017751238Z" level=info msg="CreateContainer within sandbox \"4b2bcd7c64736071e49bf3c135f2aeb17552ae65585240ad7062bdaa8f00d150\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:14:08.021126 containerd[1599]: time="2025-09-12T17:14:08.021082243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-0-ae88ce84d6,Uid:f33701b81866c9a553ad70af38246e1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbe991738d78648634be60976edb45c499853b91f57f622f6e22dcccf325e2d5\"" Sep 12 17:14:08.028814 containerd[1599]: time="2025-09-12T17:14:08.025883003Z" level=info msg="CreateContainer within sandbox \"dbe991738d78648634be60976edb45c499853b91f57f622f6e22dcccf325e2d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:14:08.038796 kubelet[2385]: W0912 17:14:08.036740 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://5.75.227.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-0-ae88ce84d6&limit=500&resourceVersion=0": dial tcp 5.75.227.222:6443: connect: connection refused Sep 12 17:14:08.038796 kubelet[2385]: E0912 17:14:08.037900 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://5.75.227.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-0-ae88ce84d6&limit=500&resourceVersion=0\": dial tcp 5.75.227.222:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:14:08.051712 containerd[1599]: time="2025-09-12T17:14:08.051442341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-0-ae88ce84d6,Uid:b9cbdac9f2e0ca77e2a208f4c52eb889,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ad27f7db0a8843a866ef6fb5ee7e001fe46449136eb58625d6418ca9c42595c\"" Sep 12 17:14:08.054806 containerd[1599]: time="2025-09-12T17:14:08.054772706Z" level=info msg="CreateContainer within sandbox \"4b2bcd7c64736071e49bf3c135f2aeb17552ae65585240ad7062bdaa8f00d150\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"71133269839f6fe21ea7c7c221940785137644ddd21b0e28e89b587462678b26\"" Sep 12 17:14:08.055348 containerd[1599]: time="2025-09-12T17:14:08.055318558Z" level=info msg="CreateContainer within sandbox \"7ad27f7db0a8843a866ef6fb5ee7e001fe46449136eb58625d6418ca9c42595c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:14:08.056777 containerd[1599]: time="2025-09-12T17:14:08.055980157Z" level=info msg="StartContainer for \"71133269839f6fe21ea7c7c221940785137644ddd21b0e28e89b587462678b26\"" Sep 12 17:14:08.071678 containerd[1599]: time="2025-09-12T17:14:08.071614696Z" level=info msg="CreateContainer within sandbox \"dbe991738d78648634be60976edb45c499853b91f57f622f6e22dcccf325e2d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0746667f7c1bb873b80c6fd94519744f77d3fd4e96098a9f9601348b0f98ed5\"" Sep 12 17:14:08.072630 containerd[1599]: 
time="2025-09-12T17:14:08.072567327Z" level=info msg="StartContainer for \"c0746667f7c1bb873b80c6fd94519744f77d3fd4e96098a9f9601348b0f98ed5\"" Sep 12 17:14:08.078881 kubelet[2385]: E0912 17:14:08.078784 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.227.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-0-ae88ce84d6?timeout=10s\": dial tcp 5.75.227.222:6443: connect: connection refused" interval="1.6s" Sep 12 17:14:08.083376 containerd[1599]: time="2025-09-12T17:14:08.083309083Z" level=info msg="CreateContainer within sandbox \"7ad27f7db0a8843a866ef6fb5ee7e001fe46449136eb58625d6418ca9c42595c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7fcb5cd9785d2ec3837f0e0b0170a046f1850b5414b6298897a3fc7b118e9e2e\"" Sep 12 17:14:08.084840 containerd[1599]: time="2025-09-12T17:14:08.084749911Z" level=info msg="StartContainer for \"7fcb5cd9785d2ec3837f0e0b0170a046f1850b5414b6298897a3fc7b118e9e2e\"" Sep 12 17:14:08.146425 containerd[1599]: time="2025-09-12T17:14:08.146245494Z" level=info msg="StartContainer for \"71133269839f6fe21ea7c7c221940785137644ddd21b0e28e89b587462678b26\" returns successfully" Sep 12 17:14:08.169475 kubelet[2385]: W0912 17:14:08.168962 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://5.75.227.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 5.75.227.222:6443: connect: connection refused Sep 12 17:14:08.169475 kubelet[2385]: E0912 17:14:08.169037 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://5.75.227.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 5.75.227.222:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:14:08.184009 containerd[1599]: time="2025-09-12T17:14:08.183950287Z" level=info msg="StartContainer for \"c0746667f7c1bb873b80c6fd94519744f77d3fd4e96098a9f9601348b0f98ed5\" returns successfully" Sep 12 17:14:08.222280 containerd[1599]: time="2025-09-12T17:14:08.222162643Z" level=info msg="StartContainer for \"7fcb5cd9785d2ec3837f0e0b0170a046f1850b5414b6298897a3fc7b118e9e2e\" returns successfully" Sep 12 17:14:08.239650 kubelet[2385]: I0912 17:14:08.238516 2385 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:08.239650 kubelet[2385]: E0912 17:14:08.239027 2385 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.227.222:6443/api/v1/nodes\": dial tcp 5.75.227.222:6443: connect: connection refused" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:09.843125 kubelet[2385]: I0912 17:14:09.843087 2385 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:10.158119 kubelet[2385]: E0912 17:14:10.158061 2385 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-0-ae88ce84d6\" not found" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:10.304428 kubelet[2385]: I0912 17:14:10.304367 2385 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:10.304428 kubelet[2385]: E0912 17:14:10.304425 2385 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-0-ae88ce84d6\": node \"ci-4081-3-6-0-ae88ce84d6\" not found" Sep 12 
17:14:10.655213 kubelet[2385]: I0912 17:14:10.655160 2385 apiserver.go:52] "Watching apiserver" Sep 12 17:14:10.674096 kubelet[2385]: I0912 17:14:10.674030 2385 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:14:12.579436 systemd[1]: Reloading requested from client PID 2660 ('systemctl') (unit session-7.scope)... Sep 12 17:14:12.579452 systemd[1]: Reloading... Sep 12 17:14:12.680872 zram_generator::config[2700]: No configuration found. Sep 12 17:14:12.800267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:14:12.887987 systemd[1]: Reloading finished in 308 ms. Sep 12 17:14:12.923290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:12.933622 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:14:12.934365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:12.944389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:13.101124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:13.114327 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:14:13.179313 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:14:13.179313 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:14:13.179313 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:14:13.181058 kubelet[2755]: I0912 17:14:13.179261 2755 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:14:13.191437 kubelet[2755]: I0912 17:14:13.191374 2755 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:14:13.191437 kubelet[2755]: I0912 17:14:13.191418 2755 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:14:13.191745 kubelet[2755]: I0912 17:14:13.191727 2755 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:14:13.193479 kubelet[2755]: I0912 17:14:13.193448 2755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:14:13.200559 kubelet[2755]: I0912 17:14:13.200251 2755 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:14:13.208822 kubelet[2755]: E0912 17:14:13.208760 2755 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:14:13.208822 kubelet[2755]: I0912 17:14:13.208819 2755 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 12 17:14:13.211674 kubelet[2755]: I0912 17:14:13.211642 2755 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:14:13.212853 kubelet[2755]: I0912 17:14:13.212767 2755 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:14:13.213006 kubelet[2755]: I0912 17:14:13.212929 2755 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:14:13.218212 kubelet[2755]: I0912 17:14:13.212963 2755 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-0-ae88ce84d6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:14:13.218212 kubelet[2755]: I0912 17:14:13.213203 2755 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:14:13.218212 kubelet[2755]: I0912 17:14:13.213227 2755 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:14:13.218212 kubelet[2755]: I0912 17:14:13.213273 2755 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:14:13.218212 kubelet[2755]: I0912 17:14:13.213449 2755 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:14:13.218895 kubelet[2755]: I0912 17:14:13.213483 2755 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:14:13.218895 kubelet[2755]: I0912 17:14:13.213505 2755 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:14:13.218895 kubelet[2755]: I0912 17:14:13.213519 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:14:13.221511 kubelet[2755]: I0912 17:14:13.220295 2755 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:14:13.221511 kubelet[2755]: I0912 17:14:13.220886 2755 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:14:13.221651 kubelet[2755]: I0912 
17:14:13.221529 2755 server.go:1274] "Started kubelet" Sep 12 17:14:13.226634 kubelet[2755]: I0912 17:14:13.225710 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:14:13.238968 kubelet[2755]: I0912 17:14:13.238907 2755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:14:13.240667 kubelet[2755]: I0912 17:14:13.240639 2755 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:14:13.242025 kubelet[2755]: I0912 17:14:13.241962 2755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:14:13.242220 kubelet[2755]: I0912 17:14:13.242201 2755 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:14:13.243997 kubelet[2755]: I0912 17:14:13.242724 2755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:14:13.244501 kubelet[2755]: I0912 17:14:13.244447 2755 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:14:13.244740 kubelet[2755]: E0912 17:14:13.244712 2755 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-0-ae88ce84d6\" not found" Sep 12 17:14:13.245288 kubelet[2755]: I0912 17:14:13.245257 2755 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:14:13.247120 kubelet[2755]: I0912 17:14:13.245400 2755 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:14:13.250585 kubelet[2755]: I0912 17:14:13.250542 2755 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:14:13.252950 kubelet[2755]: I0912 17:14:13.250714 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:14:13.254885 kubelet[2755]: E0912 17:14:13.254818 2755 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:14:13.255349 kubelet[2755]: I0912 17:14:13.255306 2755 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:14:13.263842 kubelet[2755]: I0912 17:14:13.263779 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:14:13.272489 kubelet[2755]: I0912 17:14:13.271726 2755 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:14:13.272489 kubelet[2755]: I0912 17:14:13.271757 2755 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:14:13.272489 kubelet[2755]: I0912 17:14:13.271780 2755 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:14:13.272489 kubelet[2755]: E0912 17:14:13.271853 2755 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:14:13.336097 kubelet[2755]: I0912 17:14:13.336064 2755 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:14:13.336341 kubelet[2755]: I0912 17:14:13.336327 2755 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:14:13.336407 kubelet[2755]: I0912 17:14:13.336399 2755 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:14:13.336658 kubelet[2755]: I0912 17:14:13.336643 2755 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:14:13.336791 kubelet[2755]: I0912 17:14:13.336714 2755 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:14:13.336952 kubelet[2755]: I0912 17:14:13.336941 2755 policy_none.go:49] "None policy: Start" Sep 12 17:14:13.338262 kubelet[2755]: I0912 17:14:13.338225 2755 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:14:13.338457 kubelet[2755]: I0912 17:14:13.338271 2755 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:14:13.338538 kubelet[2755]: I0912 17:14:13.338515 2755 state_mem.go:75] "Updated machine memory state" Sep 12 17:14:13.340581 kubelet[2755]: I0912 17:14:13.340549 2755 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:14:13.340813 kubelet[2755]: I0912 17:14:13.340793 2755 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:14:13.341843 kubelet[2755]: I0912 17:14:13.340814 2755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:14:13.341843 kubelet[2755]: I0912 17:14:13.341117 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:14:13.386858 kubelet[2755]: E0912 17:14:13.386793 2755 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.447565 kubelet[2755]: I0912 17:14:13.446324 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f33701b81866c9a553ad70af38246e1c-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" (UID: \"f33701b81866c9a553ad70af38246e1c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.447565 kubelet[2755]: I0912 17:14:13.446375 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9cbdac9f2e0ca77e2a208f4c52eb889-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-0-ae88ce84d6\" (UID: \"b9cbdac9f2e0ca77e2a208f4c52eb889\") " pod="kube-system/kube-scheduler-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.447565 kubelet[2755]: I0912 17:14:13.446408 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c25d8ab718eeb168bb9c7302b6f298cf-ca-certs\") pod 
\"kube-apiserver-ci-4081-3-6-0-ae88ce84d6\" (UID: \"c25d8ab718eeb168bb9c7302b6f298cf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.447565 kubelet[2755]: I0912 17:14:13.446434 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c25d8ab718eeb168bb9c7302b6f298cf-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-0-ae88ce84d6\" (UID: \"c25d8ab718eeb168bb9c7302b6f298cf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.447565 kubelet[2755]: I0912 17:14:13.446462 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f33701b81866c9a553ad70af38246e1c-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" (UID: \"f33701b81866c9a553ad70af38246e1c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.447822 kubelet[2755]: I0912 17:14:13.446490 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f33701b81866c9a553ad70af38246e1c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" (UID: \"f33701b81866c9a553ad70af38246e1c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.447822 kubelet[2755]: I0912 17:14:13.446521 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c25d8ab718eeb168bb9c7302b6f298cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-0-ae88ce84d6\" (UID: \"c25d8ab718eeb168bb9c7302b6f298cf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.447822 kubelet[2755]: I0912 17:14:13.446547 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f33701b81866c9a553ad70af38246e1c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" (UID: \"f33701b81866c9a553ad70af38246e1c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.447822 kubelet[2755]: I0912 17:14:13.446573 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f33701b81866c9a553ad70af38246e1c-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" (UID: \"f33701b81866c9a553ad70af38246e1c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.447822 kubelet[2755]: I0912 17:14:13.447035 2755 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.463116 kubelet[2755]: I0912 17:14:13.463010 2755 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:13.463437 kubelet[2755]: I0912 17:14:13.463265 2755 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:14.216124 kubelet[2755]: I0912 17:14:14.215987 2755 apiserver.go:52] "Watching apiserver" Sep 12 17:14:14.245938 kubelet[2755]: I0912 17:14:14.245883 2755 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:14:14.327397 kubelet[2755]: E0912 17:14:14.327169 2755 
Sep 12 17:14:14.327397 kubelet[2755]: E0912 17:14:14.327169 2755 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-6-0-ae88ce84d6\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-0-ae88ce84d6"
Sep 12 17:14:14.330006 kubelet[2755]: E0912 17:14:14.329705 2755 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-6-0-ae88ce84d6\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6"
Sep 12 17:14:14.362769 kubelet[2755]: I0912 17:14:14.361470 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-0-ae88ce84d6" podStartSLOduration=1.361435322 podStartE2EDuration="1.361435322s" podCreationTimestamp="2025-09-12 17:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:14:14.359992197 +0000 UTC m=+1.240799723" watchObservedRunningTime="2025-09-12 17:14:14.361435322 +0000 UTC m=+1.242242888"
Sep 12 17:14:14.363117 kubelet[2755]: I0912 17:14:14.362987 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-0-ae88ce84d6" podStartSLOduration=1.362962264 podStartE2EDuration="1.362962264s" podCreationTimestamp="2025-09-12 17:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:14:14.348132973 +0000 UTC m=+1.228940499" watchObservedRunningTime="2025-09-12 17:14:14.362962264 +0000 UTC m=+1.243769830"
Sep 12 17:14:18.518817 kubelet[2755]: I0912 17:14:18.518641 2755 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 17:14:18.519826 containerd[1599]: time="2025-09-12T17:14:18.519466296Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
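"Updating runtime config through cri with podcidr" is the kubelet pushing the node's pod CIDR to the runtime over the CRI, which then hands it to the CNI layer. A sketch of the equivalent standalone call, assuming the k8s.io/cri-api runtime v1 client and containerd's default socket path (this mirrors what the kubelet does; it is not the kubelet's actual code):

// Sketch: the CRI UpdateRuntimeConfig call behind the log entry above.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// CIDR value copied from the kubelet entry above.
	_, err = runtimev1.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(
		context.Background(),
		&runtimev1.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimev1.RuntimeConfig{
				NetworkConfig: &runtimev1.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
}

containerd's reply ("No cni config template is specified, wait for other system components to drop the config.") is expected here: on this node the CNI config file is written later by Calico rather than templated from the pod CIDR.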
Sep 12 17:14:18.520700 kubelet[2755]: I0912 17:14:18.520180 2755 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 17:14:19.376687 kubelet[2755]: I0912 17:14:19.376602 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-0-ae88ce84d6" podStartSLOduration=8.376578413 podStartE2EDuration="8.376578413s" podCreationTimestamp="2025-09-12 17:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:14:14.392480579 +0000 UTC m=+1.273288105" watchObservedRunningTime="2025-09-12 17:14:19.376578413 +0000 UTC m=+6.257385939"
Sep 12 17:14:19.386010 kubelet[2755]: I0912 17:14:19.385962 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8560573a-4ed8-4e82-9d77-9524a652bc24-xtables-lock\") pod \"kube-proxy-qnl5z\" (UID: \"8560573a-4ed8-4e82-9d77-9524a652bc24\") " pod="kube-system/kube-proxy-qnl5z"
Sep 12 17:14:19.391866 kubelet[2755]: I0912 17:14:19.386635 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8560573a-4ed8-4e82-9d77-9524a652bc24-lib-modules\") pod \"kube-proxy-qnl5z\" (UID: \"8560573a-4ed8-4e82-9d77-9524a652bc24\") " pod="kube-system/kube-proxy-qnl5z"
Sep 12 17:14:19.391866 kubelet[2755]: I0912 17:14:19.390129 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8560573a-4ed8-4e82-9d77-9524a652bc24-kube-proxy\") pod \"kube-proxy-qnl5z\" (UID: \"8560573a-4ed8-4e82-9d77-9524a652bc24\") " pod="kube-system/kube-proxy-qnl5z"
Sep 12 17:14:19.490493 kubelet[2755]: I0912 17:14:19.490381 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jkx\" (UniqueName: \"kubernetes.io/projected/8560573a-4ed8-4e82-9d77-9524a652bc24-kube-api-access-67jkx\") pod \"kube-proxy-qnl5z\" (UID: \"8560573a-4ed8-4e82-9d77-9524a652bc24\") " pod="kube-system/kube-proxy-qnl5z"
Sep 12 17:14:19.689264 containerd[1599]: time="2025-09-12T17:14:19.689215416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnl5z,Uid:8560573a-4ed8-4e82-9d77-9524a652bc24,Namespace:kube-system,Attempt:0,}"
Sep 12 17:14:19.693279 kubelet[2755]: I0912 17:14:19.693111 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxtgq\" (UniqueName: \"kubernetes.io/projected/3af485b9-c78a-4eca-b805-e549d767acbe-kube-api-access-kxtgq\") pod \"tigera-operator-58fc44c59b-ln5c9\" (UID: \"3af485b9-c78a-4eca-b805-e549d767acbe\") " pod="tigera-operator/tigera-operator-58fc44c59b-ln5c9"
Sep 12 17:14:19.693279 kubelet[2755]: I0912 17:14:19.693166 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3af485b9-c78a-4eca-b805-e549d767acbe-var-lib-calico\") pod \"tigera-operator-58fc44c59b-ln5c9\" (UID: \"3af485b9-c78a-4eca-b805-e549d767acbe\") " pod="tigera-operator/tigera-operator-58fc44c59b-ln5c9"
Sep 12 17:14:19.721391 containerd[1599]: time="2025-09-12T17:14:19.721118782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:14:19.721391 containerd[1599]: time="2025-09-12T17:14:19.721191994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:14:19.721391 containerd[1599]: time="2025-09-12T17:14:19.721212878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:14:19.721391 containerd[1599]: time="2025-09-12T17:14:19.721327938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:14:19.742430 systemd[1]: run-containerd-runc-k8s.io-aad202d583e1ebd4aa3e44c734fe80af6f1f1f8df07119555560b97ef0626519-runc.yV601d.mount: Deactivated successfully.
Sep 12 17:14:19.765620 containerd[1599]: time="2025-09-12T17:14:19.765566425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnl5z,Uid:8560573a-4ed8-4e82-9d77-9524a652bc24,Namespace:kube-system,Attempt:0,} returns sandbox id \"aad202d583e1ebd4aa3e44c734fe80af6f1f1f8df07119555560b97ef0626519\""
Sep 12 17:14:19.771169 containerd[1599]: time="2025-09-12T17:14:19.771123101Z" level=info msg="CreateContainer within sandbox \"aad202d583e1ebd4aa3e44c734fe80af6f1f1f8df07119555560b97ef0626519\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:14:19.796758 containerd[1599]: time="2025-09-12T17:14:19.796495304Z" level=info msg="CreateContainer within sandbox \"aad202d583e1ebd4aa3e44c734fe80af6f1f1f8df07119555560b97ef0626519\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"faf68cfc022d21ca486ed5c77779fb07fa96c1f4ae66ad188ec0135d11cc8245\""
Sep 12 17:14:19.798234 containerd[1599]: time="2025-09-12T17:14:19.798190795Z" level=info msg="StartContainer for \"faf68cfc022d21ca486ed5c77779fb07fa96c1f4ae66ad188ec0135d11cc8245\""
Sep 12 17:14:19.861693 containerd[1599]: time="2025-09-12T17:14:19.861635866Z" level=info msg="StartContainer for \"faf68cfc022d21ca486ed5c77779fb07fa96c1f4ae66ad188ec0135d11cc8245\" returns successfully"
Sep 12 17:14:19.990987 containerd[1599]: time="2025-09-12T17:14:19.989559184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-ln5c9,Uid:3af485b9-c78a-4eca-b805-e549d767acbe,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:14:20.022514 containerd[1599]: time="2025-09-12T17:14:20.022410342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:14:20.022700 containerd[1599]: time="2025-09-12T17:14:20.022494396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:14:20.022878 containerd[1599]: time="2025-09-12T17:14:20.022758121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:14:20.023123 containerd[1599]: time="2025-09-12T17:14:20.023084495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:14:20.087790 containerd[1599]: time="2025-09-12T17:14:20.087659808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-ln5c9,Uid:3af485b9-c78a-4eca-b805-e549d767acbe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4bdea02d954ad85dd5bcef8b2218d5dda08c358e83ea9ab38bcc6733ae245de3\""
Sep 12 17:14:20.090359 containerd[1599]: time="2025-09-12T17:14:20.090165428Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:14:22.172771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1655213826.mount: Deactivated successfully.
Sep 12 17:14:22.482791 kubelet[2755]: I0912 17:14:22.482629 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qnl5z" podStartSLOduration=3.482601753 podStartE2EDuration="3.482601753s" podCreationTimestamp="2025-09-12 17:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:14:20.363270801 +0000 UTC m=+7.244078367" watchObservedRunningTime="2025-09-12 17:14:22.482601753 +0000 UTC m=+9.363409279"
Sep 12 17:14:22.548937 containerd[1599]: time="2025-09-12T17:14:22.546583836Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:14:22.548937 containerd[1599]: time="2025-09-12T17:14:22.547768026Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 12 17:14:22.548937 containerd[1599]: time="2025-09-12T17:14:22.548062113Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:14:22.552731 containerd[1599]: time="2025-09-12T17:14:22.552654728Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:14:22.554326 containerd[1599]: time="2025-09-12T17:14:22.554230500Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.464007223s"
Sep 12 17:14:22.554428 containerd[1599]: time="2025-09-12T17:14:22.554341758Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 12 17:14:22.558682 containerd[1599]: time="2025-09-12T17:14:22.558618883Z" level=info msg="CreateContainer within sandbox \"4bdea02d954ad85dd5bcef8b2218d5dda08c358e83ea9ab38bcc6733ae245de3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 17:14:22.572526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2330598930.mount: Deactivated successfully.
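The pull entries above give enough to estimate effective pull throughput directly (bytes read and wall time copied from the log; the arithmetic is the only addition here):

// Sketch: back-of-envelope pull throughput for the tigera-operator image,
// 22,152,365 bytes read in 2.464007223s per the two containerd entries above.
package main

import "fmt"

func main() {
	const bytesRead = 22152365.0
	const seconds = 2.464007223
	fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1024*1024)) // prints ≈ 8.57 MiB/s
}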
Sep 12 17:14:22.576264 containerd[1599]: time="2025-09-12T17:14:22.576118524Z" level=info msg="CreateContainer within sandbox \"4bdea02d954ad85dd5bcef8b2218d5dda08c358e83ea9ab38bcc6733ae245de3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e823edc93c86528457dc532f0473cc25cdf822ab0e64c109b9ae9bb44242c2de\""
Sep 12 17:14:22.577971 containerd[1599]: time="2025-09-12T17:14:22.577860803Z" level=info msg="StartContainer for \"e823edc93c86528457dc532f0473cc25cdf822ab0e64c109b9ae9bb44242c2de\""
Sep 12 17:14:22.645725 containerd[1599]: time="2025-09-12T17:14:22.645659617Z" level=info msg="StartContainer for \"e823edc93c86528457dc532f0473cc25cdf822ab0e64c109b9ae9bb44242c2de\" returns successfully"
Sep 12 17:14:23.370874 kubelet[2755]: I0912 17:14:23.370302 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-ln5c9" podStartSLOduration=1.903981664 podStartE2EDuration="4.370279779s" podCreationTimestamp="2025-09-12 17:14:19 +0000 UTC" firstStartedPulling="2025-09-12 17:14:20.089465911 +0000 UTC m=+6.970273437" lastFinishedPulling="2025-09-12 17:14:22.555764066 +0000 UTC m=+9.436571552" observedRunningTime="2025-09-12 17:14:23.368612918 +0000 UTC m=+10.249420444" watchObservedRunningTime="2025-09-12 17:14:23.370279779 +0000 UTC m=+10.251087305"
Sep 12 17:14:29.167102 sudo[1867]: pam_unix(sudo:session): session closed for user root
Sep 12 17:14:29.336093 sshd[1863]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:29.348724 systemd[1]: sshd@6-5.75.227.222:22-139.178.89.65:52894.service: Deactivated successfully.
Sep 12 17:14:29.356542 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:14:29.356884 systemd-logind[1578]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:14:29.359806 systemd-logind[1578]: Removed session 7.
Sep 12 17:14:37.136721 kubelet[2755]: W0912 17:14:37.136589 2755 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081-3-6-0-ae88ce84d6" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-0-ae88ce84d6' and this object
Sep 12 17:14:37.136721 kubelet[2755]: E0912 17:14:37.136658 2755 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081-3-6-0-ae88ce84d6\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-0-ae88ce84d6' and this object" logger="UnhandledError"
Sep 12 17:14:37.313585 kubelet[2755]: I0912 17:14:37.312279 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/011839fb-2096-45c5-adf0-e71ca8be5482-tigera-ca-bundle\") pod \"calico-typha-7c994b957f-t2mmf\" (UID: \"011839fb-2096-45c5-adf0-e71ca8be5482\") " pod="calico-system/calico-typha-7c994b957f-t2mmf"
Sep 12 17:14:37.313585 kubelet[2755]: I0912 17:14:37.312326 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-flexvol-driver-host\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.313585 kubelet[2755]: I0912 17:14:37.312346 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-policysync\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.313585 kubelet[2755]: I0912 17:14:37.312365 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-tigera-ca-bundle\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.313585 kubelet[2755]: I0912 17:14:37.312381 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/011839fb-2096-45c5-adf0-e71ca8be5482-typha-certs\") pod \"calico-typha-7c994b957f-t2mmf\" (UID: \"011839fb-2096-45c5-adf0-e71ca8be5482\") " pod="calico-system/calico-typha-7c994b957f-t2mmf"
Sep 12 17:14:37.313958 kubelet[2755]: I0912 17:14:37.312400 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-cni-net-dir\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.313958 kubelet[2755]: I0912 17:14:37.312415 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-lib-modules\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.313958 kubelet[2755]: I0912 17:14:37.312431 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkh62\" (UniqueName: \"kubernetes.io/projected/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-kube-api-access-pkh62\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.313958 kubelet[2755]: I0912 17:14:37.312448 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-cni-bin-dir\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.313958 kubelet[2755]: I0912 17:14:37.312462 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-xtables-lock\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.314088 kubelet[2755]: I0912 17:14:37.312478 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-var-run-calico\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.314088 kubelet[2755]: I0912 17:14:37.312494 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-node-certs\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.314088 kubelet[2755]: I0912 17:14:37.312508 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-var-lib-calico\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.314088 kubelet[2755]: I0912 17:14:37.312528 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aaaf3aec-275d-4f92-a0c8-5cff3c41a264-cni-log-dir\") pod \"calico-node-w2j9p\" (UID: \"aaaf3aec-275d-4f92-a0c8-5cff3c41a264\") " pod="calico-system/calico-node-w2j9p"
Sep 12 17:14:37.314088 kubelet[2755]: I0912 17:14:37.312546 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvjff\" (UniqueName: \"kubernetes.io/projected/011839fb-2096-45c5-adf0-e71ca8be5482-kube-api-access-vvjff\") pod \"calico-typha-7c994b957f-t2mmf\" (UID: \"011839fb-2096-45c5-adf0-e71ca8be5482\") " pod="calico-system/calico-typha-7c994b957f-t2mmf"
Sep 12 17:14:37.422491 kubelet[2755]: E0912 17:14:37.422153 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.422491 kubelet[2755]: W0912 17:14:37.422267 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.424219 kubelet[2755]: E0912 17:14:37.422294 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.424931 kubelet[2755]: E0912 17:14:37.424565 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.424931 kubelet[2755]: W0912 17:14:37.424889 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.425680 kubelet[2755]: E0912 17:14:37.424916 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.427293 kubelet[2755]: E0912 17:14:37.427249 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.427293 kubelet[2755]: W0912 17:14:37.427267 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.427986 kubelet[2755]: E0912 17:14:37.427421 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.430100 kubelet[2755]: E0912 17:14:37.430030 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.430100 kubelet[2755]: W0912 17:14:37.430048 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.430100 kubelet[2755]: E0912 17:14:37.430067 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.446004 kubelet[2755]: E0912 17:14:37.445819 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.446004 kubelet[2755]: W0912 17:14:37.445867 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
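These repeating triplets all describe the same condition: the plugin prober execs the nodeagent~uds FlexVolume driver with the single argument init, the binary does not exist yet, stdout comes back empty, and unmarshalling "" as JSON fails with "unexpected end of JSON input". The contract the prober expects is an executable that prints a JSON status object; a minimal illustrative stub of that contract is sketched below (this is not Calico's actual uds driver, and the noise normally stops once calico-node's init container installs the real binary into the exec directory):

// Sketch: the FlexVolume driver-call contract. The kubelet invokes the driver
// as `<driver> init` and parses its stdout as JSON; printing nothing is what
// produces the "unexpected end of JSON input" errors in the log.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out.Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	out.Encode(driverStatus{Status: "Not supported"})
	os.Exit(1)
}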
Sep 12 17:14:37.447951 kubelet[2755]: E0912 17:14:37.447892 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.463608 kubelet[2755]: E0912 17:14:37.463543 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.463608 kubelet[2755]: W0912 17:14:37.463596 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.463815 kubelet[2755]: E0912 17:14:37.463630 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.473908 kubelet[2755]: E0912 17:14:37.473866 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.473908 kubelet[2755]: W0912 17:14:37.473900 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.475183 kubelet[2755]: E0912 17:14:37.475137 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.516295 kubelet[2755]: E0912 17:14:37.515812 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.516295 kubelet[2755]: W0912 17:14:37.515869 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.516295 kubelet[2755]: E0912 17:14:37.515897 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.518058 kubelet[2755]: E0912 17:14:37.518019 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.518058 kubelet[2755]: W0912 17:14:37.518052 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.518206 kubelet[2755]: E0912 17:14:37.518088 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.519593 kubelet[2755]: E0912 17:14:37.519558 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.519593 kubelet[2755]: W0912 17:14:37.519592 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.519720 kubelet[2755]: E0912 17:14:37.519648 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.520961 kubelet[2755]: E0912 17:14:37.520904 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.520961 kubelet[2755]: W0912 17:14:37.520956 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.521064 kubelet[2755]: E0912 17:14:37.520984 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.521603 kubelet[2755]: E0912 17:14:37.521412 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.521603 kubelet[2755]: W0912 17:14:37.521431 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.521603 kubelet[2755]: E0912 17:14:37.521445 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.521735 kubelet[2755]: E0912 17:14:37.521704 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.521735 kubelet[2755]: W0912 17:14:37.521724 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.521863 kubelet[2755]: E0912 17:14:37.521736 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.522895 kubelet[2755]: E0912 17:14:37.522382 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.522895 kubelet[2755]: W0912 17:14:37.522402 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.522895 kubelet[2755]: E0912 17:14:37.522417 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.595479 kubelet[2755]: E0912 17:14:37.595405 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-62n5m" podUID="3eea0a07-67ac-4783-b82b-7eb9a72b754d"
Sep 12 17:14:37.616069 kubelet[2755]: E0912 17:14:37.616010 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.616069 kubelet[2755]: W0912 17:14:37.616057 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.616290 kubelet[2755]: E0912 17:14:37.616111 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.616889 kubelet[2755]: E0912 17:14:37.616600 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.616889 kubelet[2755]: W0912 17:14:37.616633 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.616889 kubelet[2755]: E0912 17:14:37.616675 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.618261 kubelet[2755]: E0912 17:14:37.617549 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.618261 kubelet[2755]: W0912 17:14:37.617611 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.618261 kubelet[2755]: E0912 17:14:37.617642 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.619669 kubelet[2755]: E0912 17:14:37.619627 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.619669 kubelet[2755]: W0912 17:14:37.619653 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.619669 kubelet[2755]: E0912 17:14:37.619670 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.621146 kubelet[2755]: E0912 17:14:37.620639 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.621146 kubelet[2755]: W0912 17:14:37.620970 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.621146 kubelet[2755]: E0912 17:14:37.620994 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.622024 kubelet[2755]: E0912 17:14:37.621872 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.622024 kubelet[2755]: W0912 17:14:37.621891 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.622024 kubelet[2755]: E0912 17:14:37.621904 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.623050 kubelet[2755]: E0912 17:14:37.622805 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.623050 kubelet[2755]: W0912 17:14:37.622821 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.623050 kubelet[2755]: E0912 17:14:37.622887 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.623191 kubelet[2755]: E0912 17:14:37.623087 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.623191 kubelet[2755]: W0912 17:14:37.623116 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.623191 kubelet[2755]: E0912 17:14:37.623127 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.625347 kubelet[2755]: E0912 17:14:37.623477 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.625347 kubelet[2755]: W0912 17:14:37.623489 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.625347 kubelet[2755]: E0912 17:14:37.623499 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.625347 kubelet[2755]: E0912 17:14:37.623718 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.625347 kubelet[2755]: W0912 17:14:37.623729 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.625347 kubelet[2755]: E0912 17:14:37.623747 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.625347 kubelet[2755]: E0912 17:14:37.624512 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.625347 kubelet[2755]: W0912 17:14:37.624524 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.625347 kubelet[2755]: E0912 17:14:37.624535 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.625347 kubelet[2755]: E0912 17:14:37.624964 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.626957 kubelet[2755]: W0912 17:14:37.624975 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.626957 kubelet[2755]: E0912 17:14:37.624985 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.626957 kubelet[2755]: E0912 17:14:37.625763 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.626957 kubelet[2755]: W0912 17:14:37.625788 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.626957 kubelet[2755]: E0912 17:14:37.625801 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.626957 kubelet[2755]: E0912 17:14:37.626498 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.626957 kubelet[2755]: W0912 17:14:37.626512 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.626957 kubelet[2755]: E0912 17:14:37.626525 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.626957 kubelet[2755]: I0912 17:14:37.626549 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3eea0a07-67ac-4783-b82b-7eb9a72b754d-registration-dir\") pod \"csi-node-driver-62n5m\" (UID: \"3eea0a07-67ac-4783-b82b-7eb9a72b754d\") " pod="calico-system/csi-node-driver-62n5m"
Sep 12 17:14:37.627151 kubelet[2755]: E0912 17:14:37.626790 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.627151 kubelet[2755]: W0912 17:14:37.626802 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.627151 kubelet[2755]: E0912 17:14:37.626823 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.627151 kubelet[2755]: I0912 17:14:37.626911 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3eea0a07-67ac-4783-b82b-7eb9a72b754d-socket-dir\") pod \"csi-node-driver-62n5m\" (UID: \"3eea0a07-67ac-4783-b82b-7eb9a72b754d\") " pod="calico-system/csi-node-driver-62n5m"
Sep 12 17:14:37.627151 kubelet[2755]: E0912 17:14:37.627081 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.627151 kubelet[2755]: W0912 17:14:37.627094 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.627151 kubelet[2755]: E0912 17:14:37.627103 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.627151 kubelet[2755]: I0912 17:14:37.627123 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3eea0a07-67ac-4783-b82b-7eb9a72b754d-kubelet-dir\") pod \"csi-node-driver-62n5m\" (UID: \"3eea0a07-67ac-4783-b82b-7eb9a72b754d\") " pod="calico-system/csi-node-driver-62n5m"
Sep 12 17:14:37.630918 kubelet[2755]: E0912 17:14:37.627419 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.630918 kubelet[2755]: W0912 17:14:37.627441 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.630918 kubelet[2755]: E0912 17:14:37.627452 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.630918 kubelet[2755]: E0912 17:14:37.627613 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.630918 kubelet[2755]: W0912 17:14:37.627632 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.630918 kubelet[2755]: E0912 17:14:37.627640 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.630918 kubelet[2755]: E0912 17:14:37.627871 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.630918 kubelet[2755]: W0912 17:14:37.627882 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.630918 kubelet[2755]: E0912 17:14:37.627894 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.630918 kubelet[2755]: E0912 17:14:37.628868 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.631976 containerd[1599]: time="2025-09-12T17:14:37.629811846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w2j9p,Uid:aaaf3aec-275d-4f92-a0c8-5cff3c41a264,Namespace:calico-system,Attempt:0,}"
Sep 12 17:14:37.632504 kubelet[2755]: W0912 17:14:37.628882 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.632504 kubelet[2755]: E0912 17:14:37.628893 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.632504 kubelet[2755]: E0912 17:14:37.629076 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.632504 kubelet[2755]: W0912 17:14:37.629084 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.632504 kubelet[2755]: E0912 17:14:37.629092 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.632504 kubelet[2755]: E0912 17:14:37.629308 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.632504 kubelet[2755]: W0912 17:14:37.629321 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.632504 kubelet[2755]: E0912 17:14:37.629329 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.632504 kubelet[2755]: E0912 17:14:37.629475 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.632504 kubelet[2755]: W0912 17:14:37.629483 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.633309 kubelet[2755]: E0912 17:14:37.629490 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.633309 kubelet[2755]: E0912 17:14:37.630437 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.633309 kubelet[2755]: W0912 17:14:37.630465 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.633309 kubelet[2755]: E0912 17:14:37.630484 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.633309 kubelet[2755]: E0912 17:14:37.630711 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.633309 kubelet[2755]: W0912 17:14:37.630721 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.633309 kubelet[2755]: E0912 17:14:37.630907 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.633309 kubelet[2755]: E0912 17:14:37.631051 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.633309 kubelet[2755]: W0912 17:14:37.631060 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.633309 kubelet[2755]: E0912 17:14:37.631070 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.635359 kubelet[2755]: E0912 17:14:37.631699 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.635359 kubelet[2755]: W0912 17:14:37.631713 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.635359 kubelet[2755]: E0912 17:14:37.631750 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.635359 kubelet[2755]: E0912 17:14:37.632003 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.635359 kubelet[2755]: W0912 17:14:37.632014 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.635359 kubelet[2755]: E0912 17:14:37.632033 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.635359 kubelet[2755]: E0912 17:14:37.632536 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.635359 kubelet[2755]: W0912 17:14:37.632549 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.635359 kubelet[2755]: E0912 17:14:37.632570 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.635359 kubelet[2755]: E0912 17:14:37.633791 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.637087 kubelet[2755]: W0912 17:14:37.633805 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.637087 kubelet[2755]: E0912 17:14:37.633817 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.674354 containerd[1599]: time="2025-09-12T17:14:37.673182974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:14:37.674354 containerd[1599]: time="2025-09-12T17:14:37.673297989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:14:37.674354 containerd[1599]: time="2025-09-12T17:14:37.673320831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:14:37.674354 containerd[1599]: time="2025-09-12T17:14:37.673435326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:14:37.730021 kubelet[2755]: E0912 17:14:37.729975 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.730021 kubelet[2755]: W0912 17:14:37.730001 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.730021 kubelet[2755]: E0912 17:14:37.730023 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.730213 kubelet[2755]: I0912 17:14:37.730148 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhgkj\" (UniqueName: \"kubernetes.io/projected/3eea0a07-67ac-4783-b82b-7eb9a72b754d-kube-api-access-qhgkj\") pod \"csi-node-driver-62n5m\" (UID: \"3eea0a07-67ac-4783-b82b-7eb9a72b754d\") " pod="calico-system/csi-node-driver-62n5m"
Sep 12 17:14:37.732886 kubelet[2755]: E0912 17:14:37.731464 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.732886 kubelet[2755]: W0912 17:14:37.731488 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.732886 kubelet[2755]: E0912 17:14:37.731507 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.732886 kubelet[2755]: E0912 17:14:37.731732 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.732886 kubelet[2755]: W0912 17:14:37.731742 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.732886 kubelet[2755]: E0912 17:14:37.731751 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.732886 kubelet[2755]: E0912 17:14:37.731952 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.732886 kubelet[2755]: W0912 17:14:37.731961 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.732886 kubelet[2755]: E0912 17:14:37.731970 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.732886 kubelet[2755]: E0912 17:14:37.732202 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.733357 kubelet[2755]: W0912 17:14:37.732213 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.733357 kubelet[2755]: E0912 17:14:37.732276 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.733357 kubelet[2755]: E0912 17:14:37.732487 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.733357 kubelet[2755]: W0912 17:14:37.732497 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.733357 kubelet[2755]: E0912 17:14:37.732523 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.733357 kubelet[2755]: I0912 17:14:37.732544 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3eea0a07-67ac-4783-b82b-7eb9a72b754d-varrun\") pod \"csi-node-driver-62n5m\" (UID: \"3eea0a07-67ac-4783-b82b-7eb9a72b754d\") " pod="calico-system/csi-node-driver-62n5m"
Sep 12 17:14:37.733357 kubelet[2755]: E0912 17:14:37.732909 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.733357 kubelet[2755]: W0912 17:14:37.732921 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.733357 kubelet[2755]: E0912 17:14:37.732987 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:14:37.733537 kubelet[2755]: E0912 17:14:37.733158 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:14:37.733537 kubelet[2755]: W0912 17:14:37.733166 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:14:37.733537 kubelet[2755]: E0912 17:14:37.733174 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 12 17:14:37.733537 kubelet[2755]: E0912 17:14:37.733385 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.733537 kubelet[2755]: W0912 17:14:37.733474 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.733537 kubelet[2755]: E0912 17:14:37.733489 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.737952 kubelet[2755]: E0912 17:14:37.733720 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.737952 kubelet[2755]: W0912 17:14:37.733736 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.737952 kubelet[2755]: E0912 17:14:37.733759 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.737952 kubelet[2755]: E0912 17:14:37.733963 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.737952 kubelet[2755]: W0912 17:14:37.733973 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.737952 kubelet[2755]: E0912 17:14:37.733981 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.737952 kubelet[2755]: E0912 17:14:37.734468 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.737952 kubelet[2755]: W0912 17:14:37.734481 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.737952 kubelet[2755]: E0912 17:14:37.734492 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.737952 kubelet[2755]: E0912 17:14:37.734697 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.739194 kubelet[2755]: W0912 17:14:37.734705 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.739194 kubelet[2755]: E0912 17:14:37.734865 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:14:37.739194 kubelet[2755]: E0912 17:14:37.735771 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.739194 kubelet[2755]: W0912 17:14:37.735782 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.739194 kubelet[2755]: E0912 17:14:37.735794 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.739194 kubelet[2755]: E0912 17:14:37.735958 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.739194 kubelet[2755]: W0912 17:14:37.735966 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.739194 kubelet[2755]: E0912 17:14:37.735973 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.739194 kubelet[2755]: E0912 17:14:37.736286 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.739194 kubelet[2755]: W0912 17:14:37.736296 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.739449 kubelet[2755]: E0912 17:14:37.736304 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.739449 kubelet[2755]: E0912 17:14:37.736470 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.739449 kubelet[2755]: W0912 17:14:37.736478 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.739449 kubelet[2755]: E0912 17:14:37.736486 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.739449 kubelet[2755]: E0912 17:14:37.736774 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.739449 kubelet[2755]: W0912 17:14:37.736785 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.739449 kubelet[2755]: E0912 17:14:37.736795 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:14:37.739449 kubelet[2755]: E0912 17:14:37.737025 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.739449 kubelet[2755]: W0912 17:14:37.737034 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.739449 kubelet[2755]: E0912 17:14:37.737042 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.740136 kubelet[2755]: E0912 17:14:37.737303 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.740136 kubelet[2755]: W0912 17:14:37.737314 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.740136 kubelet[2755]: E0912 17:14:37.737324 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.740136 kubelet[2755]: E0912 17:14:37.738290 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.740136 kubelet[2755]: W0912 17:14:37.738303 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.740136 kubelet[2755]: E0912 17:14:37.738315 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.740136 kubelet[2755]: E0912 17:14:37.738537 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.740136 kubelet[2755]: W0912 17:14:37.738547 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.740136 kubelet[2755]: E0912 17:14:37.738555 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:14:37.787463 containerd[1599]: time="2025-09-12T17:14:37.787319432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w2j9p,Uid:aaaf3aec-275d-4f92-a0c8-5cff3c41a264,Namespace:calico-system,Attempt:0,} returns sandbox id \"80b2999131171a1a615e38179aa375886b1d2d45c85d6ec6493e9cd6a409ae01\"" Sep 12 17:14:37.792635 containerd[1599]: time="2025-09-12T17:14:37.792245170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:14:37.835915 kubelet[2755]: E0912 17:14:37.835878 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.835915 kubelet[2755]: W0912 17:14:37.835948 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.835915 kubelet[2755]: E0912 17:14:37.835974 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.837854 kubelet[2755]: E0912 17:14:37.837451 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.837854 kubelet[2755]: W0912 17:14:37.837472 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.837854 kubelet[2755]: E0912 17:14:37.837503 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.839087 kubelet[2755]: E0912 17:14:37.838788 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.839087 kubelet[2755]: W0912 17:14:37.838806 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.839087 kubelet[2755]: E0912 17:14:37.838918 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.841005 kubelet[2755]: E0912 17:14:37.840963 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.841005 kubelet[2755]: W0912 17:14:37.840987 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.841346 kubelet[2755]: E0912 17:14:37.841018 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
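
Note the interleaving here: while the FlexVolume prober is still failing, containerd has already returned a sandbox for calico-node-w2j9p and the kubelet has begun pulling ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3. That image backs the flexvol-driver init container created below, and it is what installs the missing nodeagent~uds/uds binary, so the probe errors are self-resolving once calico-node starts. On the kubelet side, the failing call in driver-call.go amounts to "exec the driver, parse stdout as JSON"; a rough Python analogue (illustrative only, path taken from the log):

    import json
    import subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    def driver_call(*args: str) -> dict:
        # Rough analogue of kubelet's FlexVolume driver-call: exec the
        # driver and unmarshal its stdout as JSON.
        try:
            out = subprocess.run([DRIVER, *args], capture_output=True,
                                 text=True).stdout
        except FileNotFoundError:
            # kubelet logs this case as:
            #   error: executable file not found in $PATH, output: ""
            out = ""
        # json.loads("") raises ValueError ("Expecting value ..."), the
        # Python counterpart of Go's "unexpected end of JSON input".
        return json.loads(out)

    # Until the flexvol-driver init container installs the binary, this
    # raises for the same reason the kubelet keeps logging the triple above.
    # driver_call("init")
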
Error: unexpected end of JSON input" Sep 12 17:14:37.841540 kubelet[2755]: E0912 17:14:37.841450 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.841540 kubelet[2755]: W0912 17:14:37.841467 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.841656 kubelet[2755]: E0912 17:14:37.841626 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.841711 kubelet[2755]: E0912 17:14:37.841678 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.841711 kubelet[2755]: W0912 17:14:37.841687 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.841711 kubelet[2755]: E0912 17:14:37.841704 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.841986 kubelet[2755]: E0912 17:14:37.841971 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.841986 kubelet[2755]: W0912 17:14:37.841986 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.842062 kubelet[2755]: E0912 17:14:37.842007 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.842223 kubelet[2755]: E0912 17:14:37.842209 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.842274 kubelet[2755]: W0912 17:14:37.842224 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.842323 kubelet[2755]: E0912 17:14:37.842306 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.843465 kubelet[2755]: E0912 17:14:37.843052 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.843465 kubelet[2755]: W0912 17:14:37.843229 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.843465 kubelet[2755]: E0912 17:14:37.843297 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:14:37.844375 kubelet[2755]: E0912 17:14:37.844355 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.844751 kubelet[2755]: W0912 17:14:37.844632 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.845578 kubelet[2755]: E0912 17:14:37.845250 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.849985 kubelet[2755]: E0912 17:14:37.849955 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.850441 kubelet[2755]: W0912 17:14:37.850137 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.850441 kubelet[2755]: E0912 17:14:37.850171 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.868894 kubelet[2755]: E0912 17:14:37.868809 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.869140 kubelet[2755]: W0912 17:14:37.869071 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.869140 kubelet[2755]: E0912 17:14:37.869103 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:37.943349 kubelet[2755]: E0912 17:14:37.940570 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:37.943349 kubelet[2755]: W0912 17:14:37.940605 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:37.943349 kubelet[2755]: E0912 17:14:37.940629 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:38.042757 kubelet[2755]: E0912 17:14:38.042654 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:38.042757 kubelet[2755]: W0912 17:14:38.042687 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:38.042757 kubelet[2755]: E0912 17:14:38.042711 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:14:38.144598 kubelet[2755]: E0912 17:14:38.144143 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:38.144598 kubelet[2755]: W0912 17:14:38.144186 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:38.144598 kubelet[2755]: E0912 17:14:38.144225 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:38.246090 kubelet[2755]: E0912 17:14:38.245814 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:38.246090 kubelet[2755]: W0912 17:14:38.245865 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:38.246090 kubelet[2755]: E0912 17:14:38.245900 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:38.260070 kubelet[2755]: E0912 17:14:38.260036 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:14:38.260070 kubelet[2755]: W0912 17:14:38.260063 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:14:38.260239 kubelet[2755]: E0912 17:14:38.260089 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:14:38.328354 containerd[1599]: time="2025-09-12T17:14:38.328168700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c994b957f-t2mmf,Uid:011839fb-2096-45c5-adf0-e71ca8be5482,Namespace:calico-system,Attempt:0,}" Sep 12 17:14:38.354661 containerd[1599]: time="2025-09-12T17:14:38.353723877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:38.354661 containerd[1599]: time="2025-09-12T17:14:38.353798046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:38.354661 containerd[1599]: time="2025-09-12T17:14:38.353821009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:38.354661 containerd[1599]: time="2025-09-12T17:14:38.353957546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:38.419708 containerd[1599]: time="2025-09-12T17:14:38.419667274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c994b957f-t2mmf,Uid:011839fb-2096-45c5-adf0-e71ca8be5482,Namespace:calico-system,Attempt:0,} returns sandbox id \"056a891c0dbeae751c44ddceba807e47ec33be38449d50d9046b1a86f68bc9b1\"" Sep 12 17:14:39.216939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229950070.mount: Deactivated successfully. Sep 12 17:14:39.273191 kubelet[2755]: E0912 17:14:39.273122 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-62n5m" podUID="3eea0a07-67ac-4783-b82b-7eb9a72b754d" Sep 12 17:14:39.311194 containerd[1599]: time="2025-09-12T17:14:39.311130339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:39.319463 containerd[1599]: time="2025-09-12T17:14:39.319168248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 12 17:14:39.319463 containerd[1599]: time="2025-09-12T17:14:39.319315866Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:39.328443 containerd[1599]: time="2025-09-12T17:14:39.327566321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:39.330980 containerd[1599]: time="2025-09-12T17:14:39.329727587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.537437771s" Sep 12 17:14:39.330980 containerd[1599]: time="2025-09-12T17:14:39.329779834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 17:14:39.332317 containerd[1599]: time="2025-09-12T17:14:39.331749996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:14:39.333101 containerd[1599]: time="2025-09-12T17:14:39.333058037Z" level=info msg="CreateContainer within sandbox \"80b2999131171a1a615e38179aa375886b1d2d45c85d6ec6493e9cd6a409ae01\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:14:39.357278 containerd[1599]: time="2025-09-12T17:14:39.357232932Z" level=info msg="CreateContainer within sandbox \"80b2999131171a1a615e38179aa375886b1d2d45c85d6ec6493e9cd6a409ae01\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"53a589842b0ab41cbdc2238e3f4fc92647258df5fa6896841d3373fec95efd3d\"" Sep 12 17:14:39.362659 containerd[1599]: time="2025-09-12T17:14:39.359641789Z" level=info msg="StartContainer for 
\"53a589842b0ab41cbdc2238e3f4fc92647258df5fa6896841d3373fec95efd3d\"" Sep 12 17:14:39.434773 containerd[1599]: time="2025-09-12T17:14:39.434021622Z" level=info msg="StartContainer for \"53a589842b0ab41cbdc2238e3f4fc92647258df5fa6896841d3373fec95efd3d\" returns successfully" Sep 12 17:14:39.482192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53a589842b0ab41cbdc2238e3f4fc92647258df5fa6896841d3373fec95efd3d-rootfs.mount: Deactivated successfully. Sep 12 17:14:39.525730 containerd[1599]: time="2025-09-12T17:14:39.525591771Z" level=info msg="shim disconnected" id=53a589842b0ab41cbdc2238e3f4fc92647258df5fa6896841d3373fec95efd3d namespace=k8s.io Sep 12 17:14:39.525730 containerd[1599]: time="2025-09-12T17:14:39.525676781Z" level=warning msg="cleaning up after shim disconnected" id=53a589842b0ab41cbdc2238e3f4fc92647258df5fa6896841d3373fec95efd3d namespace=k8s.io Sep 12 17:14:39.525730 containerd[1599]: time="2025-09-12T17:14:39.525724187Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:14:41.272663 kubelet[2755]: E0912 17:14:41.272597 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-62n5m" podUID="3eea0a07-67ac-4783-b82b-7eb9a72b754d" Sep 12 17:14:41.279237 containerd[1599]: time="2025-09-12T17:14:41.278824262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:41.280657 containerd[1599]: time="2025-09-12T17:14:41.280600757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=31736396" Sep 12 17:14:41.281106 containerd[1599]: time="2025-09-12T17:14:41.281074494Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:41.283345 containerd[1599]: time="2025-09-12T17:14:41.283303683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:41.284384 containerd[1599]: time="2025-09-12T17:14:41.284053734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.952258613s" Sep 12 17:14:41.284384 containerd[1599]: time="2025-09-12T17:14:41.284086658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 17:14:41.286085 containerd[1599]: time="2025-09-12T17:14:41.285847431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:14:41.301399 containerd[1599]: time="2025-09-12T17:14:41.300873326Z" level=info msg="CreateContainer within sandbox \"056a891c0dbeae751c44ddceba807e47ec33be38449d50d9046b1a86f68bc9b1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:14:41.322755 containerd[1599]: time="2025-09-12T17:14:41.321902787Z" level=info 
msg="CreateContainer within sandbox \"056a891c0dbeae751c44ddceba807e47ec33be38449d50d9046b1a86f68bc9b1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fd6c7d150048aed1277f8cb5e5ca7532ee61facbd12993bf8da6f1e12967ab3e\"" Sep 12 17:14:41.322478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1718265046.mount: Deactivated successfully. Sep 12 17:14:41.327117 containerd[1599]: time="2025-09-12T17:14:41.325185703Z" level=info msg="StartContainer for \"fd6c7d150048aed1277f8cb5e5ca7532ee61facbd12993bf8da6f1e12967ab3e\"" Sep 12 17:14:41.408720 containerd[1599]: time="2025-09-12T17:14:41.408663029Z" level=info msg="StartContainer for \"fd6c7d150048aed1277f8cb5e5ca7532ee61facbd12993bf8da6f1e12967ab3e\" returns successfully" Sep 12 17:14:42.429380 kubelet[2755]: I0912 17:14:42.429241 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c994b957f-t2mmf" podStartSLOduration=2.565146792 podStartE2EDuration="5.429210009s" podCreationTimestamp="2025-09-12 17:14:37 +0000 UTC" firstStartedPulling="2025-09-12 17:14:38.42100568 +0000 UTC m=+25.301813206" lastFinishedPulling="2025-09-12 17:14:41.285068897 +0000 UTC m=+28.165876423" observedRunningTime="2025-09-12 17:14:42.429099116 +0000 UTC m=+29.309906682" watchObservedRunningTime="2025-09-12 17:14:42.429210009 +0000 UTC m=+29.310017535" Sep 12 17:14:43.274110 kubelet[2755]: E0912 17:14:43.274019 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-62n5m" podUID="3eea0a07-67ac-4783-b82b-7eb9a72b754d" Sep 12 17:14:43.410989 kubelet[2755]: I0912 17:14:43.410783 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:14:43.755501 containerd[1599]: time="2025-09-12T17:14:43.754805727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:43.755501 containerd[1599]: time="2025-09-12T17:14:43.755455947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 17:14:43.756746 containerd[1599]: time="2025-09-12T17:14:43.756703024Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:43.760132 containerd[1599]: time="2025-09-12T17:14:43.760078899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:43.761219 containerd[1599]: time="2025-09-12T17:14:43.761183842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.475298847s" Sep 12 17:14:43.761582 containerd[1599]: time="2025-09-12T17:14:43.761367299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 17:14:43.765098 
containerd[1599]: time="2025-09-12T17:14:43.765048203Z" level=info msg="CreateContainer within sandbox \"80b2999131171a1a615e38179aa375886b1d2d45c85d6ec6493e9cd6a409ae01\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:14:43.783816 containerd[1599]: time="2025-09-12T17:14:43.783769070Z" level=info msg="CreateContainer within sandbox \"80b2999131171a1a615e38179aa375886b1d2d45c85d6ec6493e9cd6a409ae01\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b2f158cfb8c9f688d0d67591ea660b0f362160efb609b57f22eeb938bedbefc8\"" Sep 12 17:14:43.784687 containerd[1599]: time="2025-09-12T17:14:43.784660074Z" level=info msg="StartContainer for \"b2f158cfb8c9f688d0d67591ea660b0f362160efb609b57f22eeb938bedbefc8\"" Sep 12 17:14:43.856659 containerd[1599]: time="2025-09-12T17:14:43.856573027Z" level=info msg="StartContainer for \"b2f158cfb8c9f688d0d67591ea660b0f362160efb609b57f22eeb938bedbefc8\" returns successfully" Sep 12 17:14:44.402908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2f158cfb8c9f688d0d67591ea660b0f362160efb609b57f22eeb938bedbefc8-rootfs.mount: Deactivated successfully. Sep 12 17:14:44.407253 kubelet[2755]: I0912 17:14:44.406621 2755 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:14:44.448200 containerd[1599]: time="2025-09-12T17:14:44.446784923Z" level=info msg="shim disconnected" id=b2f158cfb8c9f688d0d67591ea660b0f362160efb609b57f22eeb938bedbefc8 namespace=k8s.io Sep 12 17:14:44.448200 containerd[1599]: time="2025-09-12T17:14:44.447313057Z" level=warning msg="cleaning up after shim disconnected" id=b2f158cfb8c9f688d0d67591ea660b0f362160efb609b57f22eeb938bedbefc8 namespace=k8s.io Sep 12 17:14:44.448200 containerd[1599]: time="2025-09-12T17:14:44.447330535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:14:44.495981 kubelet[2755]: I0912 17:14:44.492505 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92c5p\" (UniqueName: \"kubernetes.io/projected/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-kube-api-access-92c5p\") pod \"whisker-75d9845454-rbgv5\" (UID: \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\") " pod="calico-system/whisker-75d9845454-rbgv5" Sep 12 17:14:44.495981 kubelet[2755]: I0912 17:14:44.492548 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c3998a3-0465-446a-b912-478d1f626fde-config-volume\") pod \"coredns-7c65d6cfc9-spslt\" (UID: \"0c3998a3-0465-446a-b912-478d1f626fde\") " pod="kube-system/coredns-7c65d6cfc9-spslt" Sep 12 17:14:44.495981 kubelet[2755]: I0912 17:14:44.492568 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4zg\" (UniqueName: \"kubernetes.io/projected/0c3998a3-0465-446a-b912-478d1f626fde-kube-api-access-zh4zg\") pod \"coredns-7c65d6cfc9-spslt\" (UID: \"0c3998a3-0465-446a-b912-478d1f626fde\") " pod="kube-system/coredns-7c65d6cfc9-spslt" Sep 12 17:14:44.495981 kubelet[2755]: I0912 17:14:44.492589 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-whisker-backend-key-pair\") pod \"whisker-75d9845454-rbgv5\" (UID: \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\") " pod="calico-system/whisker-75d9845454-rbgv5" Sep 12 17:14:44.495981 kubelet[2755]: I0912 
17:14:44.492607 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26a636e5-5783-4e4c-947c-929b9b44edd2-config-volume\") pod \"coredns-7c65d6cfc9-mdw77\" (UID: \"26a636e5-5783-4e4c-947c-929b9b44edd2\") " pod="kube-system/coredns-7c65d6cfc9-mdw77" Sep 12 17:14:44.496275 kubelet[2755]: I0912 17:14:44.492624 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0-goldmane-ca-bundle\") pod \"goldmane-7988f88666-q6k4l\" (UID: \"64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0\") " pod="calico-system/goldmane-7988f88666-q6k4l" Sep 12 17:14:44.496275 kubelet[2755]: I0912 17:14:44.492641 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-whisker-ca-bundle\") pod \"whisker-75d9845454-rbgv5\" (UID: \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\") " pod="calico-system/whisker-75d9845454-rbgv5" Sep 12 17:14:44.496275 kubelet[2755]: I0912 17:14:44.492661 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dg6k\" (UniqueName: \"kubernetes.io/projected/924e4076-11ef-4908-9f70-45486e09017c-kube-api-access-6dg6k\") pod \"calico-kube-controllers-7f64c47db9-x5xkm\" (UID: \"924e4076-11ef-4908-9f70-45486e09017c\") " pod="calico-system/calico-kube-controllers-7f64c47db9-x5xkm" Sep 12 17:14:44.496275 kubelet[2755]: I0912 17:14:44.492693 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2q9x\" (UniqueName: \"kubernetes.io/projected/26a636e5-5783-4e4c-947c-929b9b44edd2-kube-api-access-c2q9x\") pod \"coredns-7c65d6cfc9-mdw77\" (UID: \"26a636e5-5783-4e4c-947c-929b9b44edd2\") " pod="kube-system/coredns-7c65d6cfc9-mdw77" Sep 12 17:14:44.496275 kubelet[2755]: I0912 17:14:44.492709 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924e4076-11ef-4908-9f70-45486e09017c-tigera-ca-bundle\") pod \"calico-kube-controllers-7f64c47db9-x5xkm\" (UID: \"924e4076-11ef-4908-9f70-45486e09017c\") " pod="calico-system/calico-kube-controllers-7f64c47db9-x5xkm" Sep 12 17:14:44.496390 kubelet[2755]: I0912 17:14:44.492726 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0-config\") pod \"goldmane-7988f88666-q6k4l\" (UID: \"64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0\") " pod="calico-system/goldmane-7988f88666-q6k4l" Sep 12 17:14:44.496390 kubelet[2755]: I0912 17:14:44.492743 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0-goldmane-key-pair\") pod \"goldmane-7988f88666-q6k4l\" (UID: \"64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0\") " pod="calico-system/goldmane-7988f88666-q6k4l" Sep 12 17:14:44.496390 kubelet[2755]: I0912 17:14:44.492758 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqwhn\" (UniqueName: \"kubernetes.io/projected/64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0-kube-api-access-qqwhn\") pod 
\"goldmane-7988f88666-q6k4l\" (UID: \"64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0\") " pod="calico-system/goldmane-7988f88666-q6k4l" Sep 12 17:14:44.593158 kubelet[2755]: I0912 17:14:44.593029 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dd730022-9476-4c1a-8998-1f184b2b1808-calico-apiserver-certs\") pod \"calico-apiserver-855468fc96-dmsh8\" (UID: \"dd730022-9476-4c1a-8998-1f184b2b1808\") " pod="calico-apiserver/calico-apiserver-855468fc96-dmsh8" Sep 12 17:14:44.593158 kubelet[2755]: I0912 17:14:44.593115 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8v28\" (UniqueName: \"kubernetes.io/projected/dd730022-9476-4c1a-8998-1f184b2b1808-kube-api-access-x8v28\") pod \"calico-apiserver-855468fc96-dmsh8\" (UID: \"dd730022-9476-4c1a-8998-1f184b2b1808\") " pod="calico-apiserver/calico-apiserver-855468fc96-dmsh8" Sep 12 17:14:44.595648 kubelet[2755]: I0912 17:14:44.593225 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f9d17caf-52ed-421a-8db2-79e3916a6185-calico-apiserver-certs\") pod \"calico-apiserver-855468fc96-2tm6q\" (UID: \"f9d17caf-52ed-421a-8db2-79e3916a6185\") " pod="calico-apiserver/calico-apiserver-855468fc96-2tm6q" Sep 12 17:14:44.595648 kubelet[2755]: I0912 17:14:44.593325 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlsqz\" (UniqueName: \"kubernetes.io/projected/f9d17caf-52ed-421a-8db2-79e3916a6185-kube-api-access-tlsqz\") pod \"calico-apiserver-855468fc96-2tm6q\" (UID: \"f9d17caf-52ed-421a-8db2-79e3916a6185\") " pod="calico-apiserver/calico-apiserver-855468fc96-2tm6q" Sep 12 17:14:44.794320 containerd[1599]: time="2025-09-12T17:14:44.793983922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-spslt,Uid:0c3998a3-0465-446a-b912-478d1f626fde,Namespace:kube-system,Attempt:0,}" Sep 12 17:14:44.795542 containerd[1599]: time="2025-09-12T17:14:44.795371909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f64c47db9-x5xkm,Uid:924e4076-11ef-4908-9f70-45486e09017c,Namespace:calico-system,Attempt:0,}" Sep 12 17:14:44.796186 containerd[1599]: time="2025-09-12T17:14:44.796155371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q6k4l,Uid:64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0,Namespace:calico-system,Attempt:0,}" Sep 12 17:14:44.799458 containerd[1599]: time="2025-09-12T17:14:44.799408205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75d9845454-rbgv5,Uid:6dec4ba3-6627-4db6-bbf3-34ddc1d002e0,Namespace:calico-system,Attempt:0,}" Sep 12 17:14:44.809795 containerd[1599]: time="2025-09-12T17:14:44.809730036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mdw77,Uid:26a636e5-5783-4e4c-947c-929b9b44edd2,Namespace:kube-system,Attempt:0,}" Sep 12 17:14:44.836494 containerd[1599]: time="2025-09-12T17:14:44.836032311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855468fc96-dmsh8,Uid:dd730022-9476-4c1a-8998-1f184b2b1808,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:14:44.836494 containerd[1599]: time="2025-09-12T17:14:44.836361470Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-855468fc96-2tm6q,Uid:f9d17caf-52ed-421a-8db2-79e3916a6185,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:14:45.080593 containerd[1599]: time="2025-09-12T17:14:45.080412818Z" level=error msg="Failed to destroy network for sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.082009 containerd[1599]: time="2025-09-12T17:14:45.081943037Z" level=error msg="encountered an error cleaning up failed sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.082137 containerd[1599]: time="2025-09-12T17:14:45.082032866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f64c47db9-x5xkm,Uid:924e4076-11ef-4908-9f70-45486e09017c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.082863 containerd[1599]: time="2025-09-12T17:14:45.082198167Z" level=error msg="Failed to destroy network for sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.084355 containerd[1599]: time="2025-09-12T17:14:45.084287159Z" level=error msg="encountered an error cleaning up failed sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.084445 containerd[1599]: time="2025-09-12T17:14:45.084377989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-spslt,Uid:0c3998a3-0465-446a-b912-478d1f626fde,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.084668 kubelet[2755]: E0912 17:14:45.084616 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.084922 kubelet[2755]: E0912 17:14:45.084902 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-spslt" Sep 12 17:14:45.085027 kubelet[2755]: E0912 17:14:45.085011 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-spslt" Sep 12 17:14:45.086903 kubelet[2755]: E0912 17:14:45.085127 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-spslt_kube-system(0c3998a3-0465-446a-b912-478d1f626fde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-spslt_kube-system(0c3998a3-0465-446a-b912-478d1f626fde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-spslt" podUID="0c3998a3-0465-446a-b912-478d1f626fde" Sep 12 17:14:45.086903 kubelet[2755]: E0912 17:14:45.084791 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.086903 kubelet[2755]: E0912 17:14:45.085983 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f64c47db9-x5xkm" Sep 12 17:14:45.087099 kubelet[2755]: E0912 17:14:45.086003 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f64c47db9-x5xkm" Sep 12 17:14:45.087099 kubelet[2755]: E0912 17:14:45.086036 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f64c47db9-x5xkm_calico-system(924e4076-11ef-4908-9f70-45486e09017c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f64c47db9-x5xkm_calico-system(924e4076-11ef-4908-9f70-45486e09017c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f64c47db9-x5xkm" podUID="924e4076-11ef-4908-9f70-45486e09017c" Sep 12 17:14:45.087191 containerd[1599]: time="2025-09-12T17:14:45.086877212Z" level=error msg="Failed to destroy network for sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.088307 containerd[1599]: time="2025-09-12T17:14:45.088259329Z" level=error msg="encountered an error cleaning up failed sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.088386 containerd[1599]: time="2025-09-12T17:14:45.088338559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75d9845454-rbgv5,Uid:6dec4ba3-6627-4db6-bbf3-34ddc1d002e0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.088779 kubelet[2755]: E0912 17:14:45.088579 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.088779 kubelet[2755]: E0912 17:14:45.088628 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75d9845454-rbgv5" Sep 12 17:14:45.088779 kubelet[2755]: E0912 17:14:45.088647 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75d9845454-rbgv5" Sep 12 17:14:45.088910 kubelet[2755]: E0912 17:14:45.088685 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75d9845454-rbgv5_calico-system(6dec4ba3-6627-4db6-bbf3-34ddc1d002e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-75d9845454-rbgv5_calico-system(6dec4ba3-6627-4db6-bbf3-34ddc1d002e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75d9845454-rbgv5" podUID="6dec4ba3-6627-4db6-bbf3-34ddc1d002e0" Sep 12 17:14:45.111462 containerd[1599]: time="2025-09-12T17:14:45.111032270Z" level=error msg="Failed to destroy network for sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.111636 containerd[1599]: time="2025-09-12T17:14:45.111591843Z" level=error msg="encountered an error cleaning up failed sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.111704 containerd[1599]: time="2025-09-12T17:14:45.111667114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855468fc96-dmsh8,Uid:dd730022-9476-4c1a-8998-1f184b2b1808,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.113273 kubelet[2755]: E0912 17:14:45.112071 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.113273 kubelet[2755]: E0912 17:14:45.112143 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855468fc96-dmsh8" Sep 12 17:14:45.113273 kubelet[2755]: E0912 17:14:45.112163 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855468fc96-dmsh8" Sep 12 17:14:45.113484 kubelet[2755]: E0912 17:14:45.112212 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-855468fc96-dmsh8_calico-apiserver(dd730022-9476-4c1a-8998-1f184b2b1808)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-855468fc96-dmsh8_calico-apiserver(dd730022-9476-4c1a-8998-1f184b2b1808)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855468fc96-dmsh8" podUID="dd730022-9476-4c1a-8998-1f184b2b1808" Sep 12 17:14:45.118387 containerd[1599]: time="2025-09-12T17:14:45.118119390Z" level=error msg="Failed to destroy network for sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.118708 containerd[1599]: time="2025-09-12T17:14:45.118668805Z" level=error msg="encountered an error cleaning up failed sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.118766 containerd[1599]: time="2025-09-12T17:14:45.118722558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q6k4l,Uid:64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.119355 kubelet[2755]: E0912 17:14:45.119044 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.119355 kubelet[2755]: E0912 17:14:45.119107 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-q6k4l" Sep 12 17:14:45.119355 kubelet[2755]: E0912 17:14:45.119128 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-q6k4l" Sep 12 17:14:45.120999 kubelet[2755]: E0912 17:14:45.119171 
2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-q6k4l_calico-system(64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-q6k4l_calico-system(64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-q6k4l" podUID="64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0" Sep 12 17:14:45.130591 containerd[1599]: time="2025-09-12T17:14:45.130519000Z" level=error msg="Failed to destroy network for sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.131720 containerd[1599]: time="2025-09-12T17:14:45.131330184Z" level=error msg="encountered an error cleaning up failed sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.132039 containerd[1599]: time="2025-09-12T17:14:45.131738736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855468fc96-2tm6q,Uid:f9d17caf-52ed-421a-8db2-79e3916a6185,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.133644 kubelet[2755]: E0912 17:14:45.132262 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.133644 kubelet[2755]: E0912 17:14:45.132330 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855468fc96-2tm6q" Sep 12 17:14:45.133644 kubelet[2755]: E0912 17:14:45.132356 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-855468fc96-2tm6q" Sep 12 17:14:45.133862 kubelet[2755]: E0912 17:14:45.132412 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-855468fc96-2tm6q_calico-apiserver(f9d17caf-52ed-421a-8db2-79e3916a6185)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-855468fc96-2tm6q_calico-apiserver(f9d17caf-52ed-421a-8db2-79e3916a6185)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855468fc96-2tm6q" podUID="f9d17caf-52ed-421a-8db2-79e3916a6185" Sep 12 17:14:45.138287 containerd[1599]: time="2025-09-12T17:14:45.138233166Z" level=error msg="Failed to destroy network for sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.138651 containerd[1599]: time="2025-09-12T17:14:45.138600442Z" level=error msg="encountered an error cleaning up failed sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.138708 containerd[1599]: time="2025-09-12T17:14:45.138665635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mdw77,Uid:26a636e5-5783-4e4c-947c-929b9b44edd2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.139194 kubelet[2755]: E0912 17:14:45.138979 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.139194 kubelet[2755]: E0912 17:14:45.139041 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mdw77" Sep 12 17:14:45.139194 kubelet[2755]: E0912 17:14:45.139059 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mdw77" Sep 12 17:14:45.139337 kubelet[2755]: E0912 17:14:45.139107 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mdw77_kube-system(26a636e5-5783-4e4c-947c-929b9b44edd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mdw77_kube-system(26a636e5-5783-4e4c-947c-929b9b44edd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mdw77" podUID="26a636e5-5783-4e4c-947c-929b9b44edd2" Sep 12 17:14:45.279187 containerd[1599]: time="2025-09-12T17:14:45.279124989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-62n5m,Uid:3eea0a07-67ac-4783-b82b-7eb9a72b754d,Namespace:calico-system,Attempt:0,}" Sep 12 17:14:45.345018 containerd[1599]: time="2025-09-12T17:14:45.344804125Z" level=error msg="Failed to destroy network for sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.347712 containerd[1599]: time="2025-09-12T17:14:45.347325066Z" level=error msg="encountered an error cleaning up failed sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.347712 containerd[1599]: time="2025-09-12T17:14:45.347415495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-62n5m,Uid:3eea0a07-67ac-4783-b82b-7eb9a72b754d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.347948 kubelet[2755]: E0912 17:14:45.347790 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.348009 kubelet[2755]: E0912 17:14:45.347940 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-62n5m" Sep 12 17:14:45.348009 kubelet[2755]: E0912 17:14:45.347985 2755 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-62n5m" Sep 12 17:14:45.348093 kubelet[2755]: E0912 17:14:45.348049 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-62n5m_calico-system(3eea0a07-67ac-4783-b82b-7eb9a72b754d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-62n5m_calico-system(3eea0a07-67ac-4783-b82b-7eb9a72b754d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-62n5m" podUID="3eea0a07-67ac-4783-b82b-7eb9a72b754d" Sep 12 17:14:45.429666 kubelet[2755]: I0912 17:14:45.427347 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:14:45.430124 containerd[1599]: time="2025-09-12T17:14:45.429067178Z" level=info msg="StopPodSandbox for \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\"" Sep 12 17:14:45.430124 containerd[1599]: time="2025-09-12T17:14:45.429284153Z" level=info msg="Ensure that sandbox e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09 in task-service has been cleanup successfully" Sep 12 17:14:45.435541 kubelet[2755]: I0912 17:14:45.435095 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:14:45.436125 containerd[1599]: time="2025-09-12T17:14:45.436091506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:14:45.437995 containerd[1599]: time="2025-09-12T17:14:45.437774187Z" level=info msg="StopPodSandbox for \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\"" Sep 12 17:14:45.438352 containerd[1599]: time="2025-09-12T17:14:45.438323721Z" level=info msg="Ensure that sandbox 4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7 in task-service has been cleanup successfully" Sep 12 17:14:45.449652 kubelet[2755]: I0912 17:14:45.449627 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:14:45.453856 containerd[1599]: time="2025-09-12T17:14:45.452949788Z" level=info msg="StopPodSandbox for \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\"" Sep 12 17:14:45.455377 containerd[1599]: time="2025-09-12T17:14:45.455237917Z" level=info msg="Ensure that sandbox 308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8 in task-service has been cleanup successfully" Sep 12 17:14:45.459778 kubelet[2755]: I0912 17:14:45.459450 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:14:45.463225 containerd[1599]: time="2025-09-12T17:14:45.463027474Z" level=info 
msg="StopPodSandbox for \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\"" Sep 12 17:14:45.464045 containerd[1599]: time="2025-09-12T17:14:45.463794783Z" level=info msg="Ensure that sandbox bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332 in task-service has been cleanup successfully" Sep 12 17:14:45.469373 kubelet[2755]: I0912 17:14:45.468980 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:14:45.471845 containerd[1599]: time="2025-09-12T17:14:45.471783476Z" level=info msg="StopPodSandbox for \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\"" Sep 12 17:14:45.472568 containerd[1599]: time="2025-09-12T17:14:45.472347289Z" level=info msg="Ensure that sandbox bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c in task-service has been cleanup successfully" Sep 12 17:14:45.478714 kubelet[2755]: I0912 17:14:45.478680 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:14:45.482313 containerd[1599]: time="2025-09-12T17:14:45.480752973Z" level=info msg="StopPodSandbox for \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\"" Sep 12 17:14:45.482313 containerd[1599]: time="2025-09-12T17:14:45.482029022Z" level=info msg="Ensure that sandbox ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52 in task-service has been cleanup successfully" Sep 12 17:14:45.486242 kubelet[2755]: I0912 17:14:45.486203 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:14:45.489051 kubelet[2755]: I0912 17:14:45.489017 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:14:45.490286 containerd[1599]: time="2025-09-12T17:14:45.489973920Z" level=info msg="StopPodSandbox for \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\"" Sep 12 17:14:45.492287 containerd[1599]: time="2025-09-12T17:14:45.492024277Z" level=info msg="Ensure that sandbox 200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e in task-service has been cleanup successfully" Sep 12 17:14:45.494666 containerd[1599]: time="2025-09-12T17:14:45.494541179Z" level=info msg="StopPodSandbox for \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\"" Sep 12 17:14:45.495638 containerd[1599]: time="2025-09-12T17:14:45.495428834Z" level=info msg="Ensure that sandbox 915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135 in task-service has been cleanup successfully" Sep 12 17:14:45.549742 containerd[1599]: time="2025-09-12T17:14:45.549651928Z" level=error msg="StopPodSandbox for \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\" failed" error="failed to destroy network for sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.550171 kubelet[2755]: E0912 17:14:45.550049 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:14:45.550323 kubelet[2755]: E0912 17:14:45.550228 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7"} Sep 12 17:14:45.550363 kubelet[2755]: E0912 17:14:45.550321 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9d17caf-52ed-421a-8db2-79e3916a6185\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:14:45.550525 kubelet[2755]: E0912 17:14:45.550356 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9d17caf-52ed-421a-8db2-79e3916a6185\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855468fc96-2tm6q" podUID="f9d17caf-52ed-421a-8db2-79e3916a6185" Sep 12 17:14:45.557057 containerd[1599]: time="2025-09-12T17:14:45.556804040Z" level=error msg="StopPodSandbox for \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\" failed" error="failed to destroy network for sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.557212 kubelet[2755]: E0912 17:14:45.557092 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:14:45.557212 kubelet[2755]: E0912 17:14:45.557159 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09"} Sep 12 17:14:45.557212 kubelet[2755]: E0912 17:14:45.557202 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"924e4076-11ef-4908-9f70-45486e09017c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Sep 12 17:14:45.557330 kubelet[2755]: E0912 17:14:45.557227 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"924e4076-11ef-4908-9f70-45486e09017c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f64c47db9-x5xkm" podUID="924e4076-11ef-4908-9f70-45486e09017c" Sep 12 17:14:45.572543 containerd[1599]: time="2025-09-12T17:14:45.572078630Z" level=error msg="StopPodSandbox for \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\" failed" error="failed to destroy network for sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.572675 kubelet[2755]: E0912 17:14:45.572596 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:14:45.572971 kubelet[2755]: E0912 17:14:45.572671 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e"} Sep 12 17:14:45.572971 kubelet[2755]: E0912 17:14:45.572745 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:14:45.572971 kubelet[2755]: E0912 17:14:45.572768 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-q6k4l" podUID="64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0" Sep 12 17:14:45.574846 containerd[1599]: time="2025-09-12T17:14:45.573917292Z" level=error msg="StopPodSandbox for \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\" failed" error="failed to destroy network for sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.574989 kubelet[2755]: E0912 17:14:45.574165 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:14:45.574989 kubelet[2755]: E0912 17:14:45.574222 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8"} Sep 12 17:14:45.574989 kubelet[2755]: E0912 17:14:45.574259 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:14:45.574989 kubelet[2755]: E0912 17:14:45.574282 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75d9845454-rbgv5" podUID="6dec4ba3-6627-4db6-bbf3-34ddc1d002e0" Sep 12 17:14:45.597541 containerd[1599]: time="2025-09-12T17:14:45.597407068Z" level=error msg="StopPodSandbox for \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\" failed" error="failed to destroy network for sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.600657 kubelet[2755]: E0912 17:14:45.600602 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:14:45.601033 kubelet[2755]: E0912 17:14:45.600921 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135"} Sep 12 17:14:45.601033 kubelet[2755]: E0912 17:14:45.600966 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c3998a3-0465-446a-b912-478d1f626fde\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:14:45.601033 kubelet[2755]: E0912 17:14:45.600989 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c3998a3-0465-446a-b912-478d1f626fde\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-spslt" podUID="0c3998a3-0465-446a-b912-478d1f626fde" Sep 12 17:14:45.603089 containerd[1599]: time="2025-09-12T17:14:45.602672764Z" level=error msg="StopPodSandbox for \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\" failed" error="failed to destroy network for sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.603192 kubelet[2755]: E0912 17:14:45.602938 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:14:45.603192 kubelet[2755]: E0912 17:14:45.602979 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c"} Sep 12 17:14:45.603192 kubelet[2755]: E0912 17:14:45.603012 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd730022-9476-4c1a-8998-1f184b2b1808\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:14:45.603192 kubelet[2755]: E0912 17:14:45.603034 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd730022-9476-4c1a-8998-1f184b2b1808\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855468fc96-dmsh8" podUID="dd730022-9476-4c1a-8998-1f184b2b1808" Sep 12 17:14:45.603963 containerd[1599]: time="2025-09-12T17:14:45.603785592Z" level=error msg="StopPodSandbox for 
\"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\" failed" error="failed to destroy network for sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.604070 kubelet[2755]: E0912 17:14:45.604037 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:14:45.604111 kubelet[2755]: E0912 17:14:45.604076 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332"} Sep 12 17:14:45.604143 kubelet[2755]: E0912 17:14:45.604104 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3eea0a07-67ac-4783-b82b-7eb9a72b754d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:14:45.604143 kubelet[2755]: E0912 17:14:45.604127 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3eea0a07-67ac-4783-b82b-7eb9a72b754d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-62n5m" podUID="3eea0a07-67ac-4783-b82b-7eb9a72b754d" Sep 12 17:14:45.609168 containerd[1599]: time="2025-09-12T17:14:45.609119960Z" level=error msg="StopPodSandbox for \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\" failed" error="failed to destroy network for sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:14:45.609691 kubelet[2755]: E0912 17:14:45.609590 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:14:45.609765 kubelet[2755]: E0912 17:14:45.609695 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52"} Sep 12 17:14:45.609765 kubelet[2755]: E0912 17:14:45.609746 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26a636e5-5783-4e4c-947c-929b9b44edd2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:14:45.609927 kubelet[2755]: E0912 17:14:45.609778 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26a636e5-5783-4e4c-947c-929b9b44edd2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mdw77" podUID="26a636e5-5783-4e4c-947c-929b9b44edd2" Sep 12 17:14:45.780813 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09-shm.mount: Deactivated successfully. Sep 12 17:14:45.781000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135-shm.mount: Deactivated successfully. Sep 12 17:14:47.656375 kubelet[2755]: I0912 17:14:47.656221 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:14:49.945721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4114795109.mount: Deactivated successfully. 
Sep 12 17:14:49.983042 containerd[1599]: time="2025-09-12T17:14:49.982416469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:49.984265 containerd[1599]: time="2025-09-12T17:14:49.983906248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:14:49.992865 containerd[1599]: time="2025-09-12T17:14:49.992675255Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:49.994146 containerd[1599]: time="2025-09-12T17:14:49.994101240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.557960299s" Sep 12 17:14:49.994146 containerd[1599]: time="2025-09-12T17:14:49.994142996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 17:14:49.994571 containerd[1599]: time="2025-09-12T17:14:49.994533758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:50.018748 containerd[1599]: time="2025-09-12T17:14:50.018592725Z" level=info msg="CreateContainer within sandbox \"80b2999131171a1a615e38179aa375886b1d2d45c85d6ec6493e9cd6a409ae01\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:14:50.044130 containerd[1599]: time="2025-09-12T17:14:50.044075604Z" level=info msg="CreateContainer within sandbox \"80b2999131171a1a615e38179aa375886b1d2d45c85d6ec6493e9cd6a409ae01\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"31990c4c3b660700a5e0bfd43c43cd45b56a5bf7bbcb00565fe39143aa294d5b\"" Sep 12 17:14:50.046448 containerd[1599]: time="2025-09-12T17:14:50.044812658Z" level=info msg="StartContainer for \"31990c4c3b660700a5e0bfd43c43cd45b56a5bf7bbcb00565fe39143aa294d5b\"" Sep 12 17:14:50.120782 containerd[1599]: time="2025-09-12T17:14:50.120730143Z" level=info msg="StartContainer for \"31990c4c3b660700a5e0bfd43c43cd45b56a5bf7bbcb00565fe39143aa294d5b\" returns successfully" Sep 12 17:14:50.272877 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:14:50.273002 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Sep 12 17:14:50.436068 containerd[1599]: time="2025-09-12T17:14:50.434380147Z" level=info msg="StopPodSandbox for \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\"" Sep 12 17:14:50.597564 kubelet[2755]: I0912 17:14:50.597501 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w2j9p" podStartSLOduration=1.393466772 podStartE2EDuration="13.597482827s" podCreationTimestamp="2025-09-12 17:14:37 +0000 UTC" firstStartedPulling="2025-09-12 17:14:37.791741387 +0000 UTC m=+24.672548913" lastFinishedPulling="2025-09-12 17:14:49.995757442 +0000 UTC m=+36.876564968" observedRunningTime="2025-09-12 17:14:50.55343225 +0000 UTC m=+37.434239816" watchObservedRunningTime="2025-09-12 17:14:50.597482827 +0000 UTC m=+37.478290353" Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.602 [INFO][3924] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.603 [INFO][3924] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" iface="eth0" netns="/var/run/netns/cni-b7012aa3-c0d3-da40-1515-c1327fa59a8b" Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.603 [INFO][3924] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" iface="eth0" netns="/var/run/netns/cni-b7012aa3-c0d3-da40-1515-c1327fa59a8b" Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.603 [INFO][3924] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" iface="eth0" netns="/var/run/netns/cni-b7012aa3-c0d3-da40-1515-c1327fa59a8b" Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.603 [INFO][3924] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.603 [INFO][3924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.724 [INFO][3935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" HandleID="k8s-pod-network.308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.725 [INFO][3935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.725 [INFO][3935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.748 [WARNING][3935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" HandleID="k8s-pod-network.308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.748 [INFO][3935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" HandleID="k8s-pod-network.308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.755 [INFO][3935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:14:50.763118 containerd[1599]: 2025-09-12 17:14:50.759 [INFO][3924] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:14:50.763118 containerd[1599]: time="2025-09-12T17:14:50.762906620Z" level=info msg="TearDown network for sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\" successfully" Sep 12 17:14:50.763118 containerd[1599]: time="2025-09-12T17:14:50.762937537Z" level=info msg="StopPodSandbox for \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\" returns successfully" Sep 12 17:14:50.944978 kubelet[2755]: I0912 17:14:50.942631 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-whisker-backend-key-pair\") pod \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\" (UID: \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\") " Sep 12 17:14:50.944978 kubelet[2755]: I0912 17:14:50.942744 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-whisker-ca-bundle\") pod \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\" (UID: \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\") " Sep 12 17:14:50.944978 kubelet[2755]: I0912 17:14:50.942814 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92c5p\" (UniqueName: \"kubernetes.io/projected/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-kube-api-access-92c5p\") pod \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\" (UID: \"6dec4ba3-6627-4db6-bbf3-34ddc1d002e0\") " Sep 12 17:14:50.949229 kubelet[2755]: I0912 17:14:50.946810 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6dec4ba3-6627-4db6-bbf3-34ddc1d002e0" (UID: "6dec4ba3-6627-4db6-bbf3-34ddc1d002e0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:14:50.949814 systemd[1]: run-netns-cni\x2db7012aa3\x2dc0d3\x2dda40\x2d1515\x2dc1327fa59a8b.mount: Deactivated successfully. Sep 12 17:14:50.955936 kubelet[2755]: I0912 17:14:50.954099 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-kube-api-access-92c5p" (OuterVolumeSpecName: "kube-api-access-92c5p") pod "6dec4ba3-6627-4db6-bbf3-34ddc1d002e0" (UID: "6dec4ba3-6627-4db6-bbf3-34ddc1d002e0"). InnerVolumeSpecName "kube-api-access-92c5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:14:50.961353 systemd[1]: var-lib-kubelet-pods-6dec4ba3\x2d6627\x2d4db6\x2dbbf3\x2d34ddc1d002e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d92c5p.mount: Deactivated successfully. Sep 12 17:14:50.962792 kubelet[2755]: I0912 17:14:50.962713 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6dec4ba3-6627-4db6-bbf3-34ddc1d002e0" (UID: "6dec4ba3-6627-4db6-bbf3-34ddc1d002e0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:14:50.967008 systemd[1]: var-lib-kubelet-pods-6dec4ba3\x2d6627\x2d4db6\x2dbbf3\x2d34ddc1d002e0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:14:51.044121 kubelet[2755]: I0912 17:14:51.043928 2755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92c5p\" (UniqueName: \"kubernetes.io/projected/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-kube-api-access-92c5p\") on node \"ci-4081-3-6-0-ae88ce84d6\" DevicePath \"\"" Sep 12 17:14:51.044121 kubelet[2755]: I0912 17:14:51.043967 2755 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-whisker-backend-key-pair\") on node \"ci-4081-3-6-0-ae88ce84d6\" DevicePath \"\"" Sep 12 17:14:51.044121 kubelet[2755]: I0912 17:14:51.043979 2755 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0-whisker-ca-bundle\") on node \"ci-4081-3-6-0-ae88ce84d6\" DevicePath \"\"" Sep 12 17:14:51.518494 kubelet[2755]: I0912 17:14:51.516825 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:14:51.751346 kubelet[2755]: I0912 17:14:51.751264 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ddt\" (UniqueName: \"kubernetes.io/projected/021e5f62-a2d9-4eaa-9de5-a11c3c1834c0-kube-api-access-z8ddt\") pod \"whisker-574d6dd485-jzgw2\" (UID: \"021e5f62-a2d9-4eaa-9de5-a11c3c1834c0\") " pod="calico-system/whisker-574d6dd485-jzgw2" Sep 12 17:14:51.752081 kubelet[2755]: I0912 17:14:51.751991 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/021e5f62-a2d9-4eaa-9de5-a11c3c1834c0-whisker-backend-key-pair\") pod \"whisker-574d6dd485-jzgw2\" (UID: \"021e5f62-a2d9-4eaa-9de5-a11c3c1834c0\") " pod="calico-system/whisker-574d6dd485-jzgw2" Sep 12 17:14:51.752176 kubelet[2755]: I0912 17:14:51.752094 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/021e5f62-a2d9-4eaa-9de5-a11c3c1834c0-whisker-ca-bundle\") pod \"whisker-574d6dd485-jzgw2\" (UID: \"021e5f62-a2d9-4eaa-9de5-a11c3c1834c0\") " pod="calico-system/whisker-574d6dd485-jzgw2" Sep 12 17:14:51.901340 containerd[1599]: time="2025-09-12T17:14:51.901246056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-574d6dd485-jzgw2,Uid:021e5f62-a2d9-4eaa-9de5-a11c3c1834c0,Namespace:calico-system,Attempt:0,}" Sep 12 17:14:52.262858 systemd-networkd[1243]: calia5d46838471: Link UP Sep 12 17:14:52.265115 
systemd-networkd[1243]: calia5d46838471: Gained carrier Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.006 [INFO][3970] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.037 [INFO][3970] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0 whisker-574d6dd485- calico-system 021e5f62-a2d9-4eaa-9de5-a11c3c1834c0 875 0 2025-09-12 17:14:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:574d6dd485 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-0-ae88ce84d6 whisker-574d6dd485-jzgw2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia5d46838471 [] [] }} ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" Namespace="calico-system" Pod="whisker-574d6dd485-jzgw2" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.038 [INFO][3970] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" Namespace="calico-system" Pod="whisker-574d6dd485-jzgw2" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.138 [INFO][4047] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" HandleID="k8s-pod-network.75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.139 [INFO][4047] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" HandleID="k8s-pod-network.75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-0-ae88ce84d6", "pod":"whisker-574d6dd485-jzgw2", "timestamp":"2025-09-12 17:14:52.138912418 +0000 UTC"}, Hostname:"ci-4081-3-6-0-ae88ce84d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.139 [INFO][4047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.139 [INFO][4047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
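
[editor's note] A few entries back, systemd deactivated mount units with names like var-lib-kubelet-pods-6dec4ba3\x2d6627\x2d...\x2d92c5p.mount. Those names come from systemd's unit-name path escaping: the leading slash is dropped, path separators become '-', and bytes outside [a-zA-Z0-9:_.] (including every literal '-' in the pod UID and the '~' in kubernetes.io~projected) are hex-escaped as \xXX. The following is a minimal Go sketch of only the behaviour visible in these entries; escapePath is an invented name, and the real systemd-escape handles extra edge cases (leading dots, repeated slashes) not modelled here.

// unit_escape_sketch.go: reproduces the unit-name escaping seen in the
// mount units above; a sketch, not systemd's implementation.
package main

import (
	"fmt"
	"strings"
)

// escapePath drops the leading '/', keeps [a-zA-Z0-9:_.] bytes, turns the
// remaining '/' separators into '-', and hex-escapes everything else as
// \xXX, which is why each literal '-' in the pod UID shows up as \x2d.
func escapePath(p string) string {
	p = strings.TrimPrefix(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0/volumes/kubernetes.io~projected/kube-api-access-92c5p"
	fmt.Println(escapePath(p) + ".mount")
	// -> var-lib-kubelet-pods-6dec4ba3\x2d6627\x2d4db6\x2dbbf3\x2d34ddc1d002e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d92c5p.mount
}

Run against the kube-api-access volume path from the earlier entries, this reproduces the logged unit name exactly, which is also how one can map a "Deactivated successfully" mount unit back to the pod UID and volume it belonged to.
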
Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.139 [INFO][4047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-0-ae88ce84d6' Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.160 [INFO][4047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.180 [INFO][4047] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.190 [INFO][4047] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.194 [INFO][4047] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.199 [INFO][4047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.199 [INFO][4047] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.205 [INFO][4047] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.223 [INFO][4047] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.233 [INFO][4047] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.129/26] block=192.168.9.128/26 handle="k8s-pod-network.75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.234 [INFO][4047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.129/26] handle="k8s-pod-network.75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.234 [INFO][4047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
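
[editor's note] The ipam trace above is Calico's per-node allocation path in miniature: confirm this host's affinity for the 192.168.9.128/26 block, load the block, pick a free ordinal, create a handle, and write the block back to claim the IP (here 192.168.9.129, the first pod address assigned on this node). Below is a self-contained Go model of that ordinal arithmetic only; the block type, map-backed allocation table, and the choice to skip ordinal 0 are invented for illustration, while real Calico persists blocks in its datastore and serializes claims behind the host-wide IPAM lock acquired and released in the entries above.

// ipam_block_sketch.go: toy model of the logged block-affinity claim;
// not Calico's implementation.
package main

import (
	"fmt"
	"net"
)

// block models an affine /26: the base CIDR plus an allocation map keyed
// by ordinal (offset from the block's network address).
type block struct {
	cidr      *net.IPNet
	allocated map[int]string // ordinal -> handle that claimed it
}

// assign claims the lowest free ordinal for handle and returns the IP,
// mirroring "Attempting to assign 1 addresses from block" followed by
// "Writing block in order to claim IPs". Ordinal 0 (the .128 network
// address) is skipped in this sketch so the first claim is .129, as in
// the log.
func (b *block) assign(handle string) net.IP {
	for ord := 1; ord < 64; ord++ { // a /26 holds 64 ordinals
		if _, taken := b.allocated[ord]; !taken {
			b.allocated[ord] = handle
			ip := make(net.IP, len(b.cidr.IP))
			copy(ip, b.cidr.IP)
			ip[len(ip)-1] += byte(ord)
			return ip
		}
	}
	return nil // block exhausted; real IPAM would look for another block
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.9.128/26")
	b := &block{cidr: cidr, allocated: map[int]string{}}
	for _, h := range []string{"whisker", "goldmane", "kube-controllers", "csi"} {
		fmt.Printf("%-16s -> %s\n", h, b.assign("k8s-pod-network."+h))
	}
}

Claiming one address per sandbox in log order yields .129, .130, .131, .132, which matches the later entries in this section as goldmane-7988f88666-q6k4l, calico-kube-controllers-7f64c47db9-x5xkm, and csi-node-driver-62n5m are each networked from the same host-affine block.
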
Sep 12 17:14:52.318649 containerd[1599]: 2025-09-12 17:14:52.234 [INFO][4047] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.129/26] IPv6=[] ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" HandleID="k8s-pod-network.75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0" Sep 12 17:14:52.319349 containerd[1599]: 2025-09-12 17:14:52.238 [INFO][3970] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" Namespace="calico-system" Pod="whisker-574d6dd485-jzgw2" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0", GenerateName:"whisker-574d6dd485-", Namespace:"calico-system", SelfLink:"", UID:"021e5f62-a2d9-4eaa-9de5-a11c3c1834c0", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"574d6dd485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"", Pod:"whisker-574d6dd485-jzgw2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5d46838471", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:52.319349 containerd[1599]: 2025-09-12 17:14:52.239 [INFO][3970] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.129/32] ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" Namespace="calico-system" Pod="whisker-574d6dd485-jzgw2" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0" Sep 12 17:14:52.319349 containerd[1599]: 2025-09-12 17:14:52.239 [INFO][3970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5d46838471 ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" Namespace="calico-system" Pod="whisker-574d6dd485-jzgw2" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0" Sep 12 17:14:52.319349 containerd[1599]: 2025-09-12 17:14:52.265 [INFO][3970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" Namespace="calico-system" Pod="whisker-574d6dd485-jzgw2" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0" Sep 12 17:14:52.319349 containerd[1599]: 2025-09-12 17:14:52.267 [INFO][3970] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" 
Namespace="calico-system" Pod="whisker-574d6dd485-jzgw2" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0", GenerateName:"whisker-574d6dd485-", Namespace:"calico-system", SelfLink:"", UID:"021e5f62-a2d9-4eaa-9de5-a11c3c1834c0", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"574d6dd485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece", Pod:"whisker-574d6dd485-jzgw2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5d46838471", MAC:"ae:0b:b9:1d:43:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:52.319349 containerd[1599]: 2025-09-12 17:14:52.310 [INFO][3970] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece" Namespace="calico-system" Pod="whisker-574d6dd485-jzgw2" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--574d6dd485--jzgw2-eth0" Sep 12 17:14:52.435906 containerd[1599]: time="2025-09-12T17:14:52.435342518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:52.435906 containerd[1599]: time="2025-09-12T17:14:52.435437551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:52.435906 containerd[1599]: time="2025-09-12T17:14:52.435449590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:52.435906 containerd[1599]: time="2025-09-12T17:14:52.435622456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:52.526872 containerd[1599]: time="2025-09-12T17:14:52.526140172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-574d6dd485-jzgw2,Uid:021e5f62-a2d9-4eaa-9de5-a11c3c1834c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece\"" Sep 12 17:14:52.531592 containerd[1599]: time="2025-09-12T17:14:52.530492908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:14:52.550759 kernel: bpftool[4143]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:14:52.744582 systemd-networkd[1243]: vxlan.calico: Link UP Sep 12 17:14:52.744595 systemd-networkd[1243]: vxlan.calico: Gained carrier Sep 12 17:14:53.277761 kubelet[2755]: I0912 17:14:53.277561 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dec4ba3-6627-4db6-bbf3-34ddc1d002e0" path="/var/lib/kubelet/pods/6dec4ba3-6627-4db6-bbf3-34ddc1d002e0/volumes" Sep 12 17:14:54.204339 systemd-networkd[1243]: calia5d46838471: Gained IPv6LL Sep 12 17:14:54.540256 containerd[1599]: time="2025-09-12T17:14:54.539273752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:54.541031 containerd[1599]: time="2025-09-12T17:14:54.540917598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 17:14:54.541975 containerd[1599]: time="2025-09-12T17:14:54.541937647Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:54.545611 containerd[1599]: time="2025-09-12T17:14:54.545550196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:54.548025 containerd[1599]: time="2025-09-12T17:14:54.547967669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 2.017434844s" Sep 12 17:14:54.548025 containerd[1599]: time="2025-09-12T17:14:54.548019545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 17:14:54.558347 containerd[1599]: time="2025-09-12T17:14:54.558202598Z" level=info msg="CreateContainer within sandbox \"75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:14:54.579580 containerd[1599]: time="2025-09-12T17:14:54.579527758Z" level=info msg="CreateContainer within sandbox \"75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"55d7ba8d70fdd81839222d15c00fe8046d9533b08e0f9dc0903e6efea753c014\"" Sep 12 17:14:54.582395 containerd[1599]: time="2025-09-12T17:14:54.582296286Z" level=info msg="StartContainer for \"55d7ba8d70fdd81839222d15c00fe8046d9533b08e0f9dc0903e6efea753c014\"" Sep 12 
17:14:54.651084 systemd-networkd[1243]: vxlan.calico: Gained IPv6LL Sep 12 17:14:54.656999 containerd[1599]: time="2025-09-12T17:14:54.656810274Z" level=info msg="StartContainer for \"55d7ba8d70fdd81839222d15c00fe8046d9533b08e0f9dc0903e6efea753c014\" returns successfully" Sep 12 17:14:54.659951 containerd[1599]: time="2025-09-12T17:14:54.659906219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:14:56.288911 containerd[1599]: time="2025-09-12T17:14:56.288228541Z" level=info msg="StopPodSandbox for \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\"" Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.386 [INFO][4265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.387 [INFO][4265] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" iface="eth0" netns="/var/run/netns/cni-c4b9d60b-547e-b54b-14b9-b9038490f470" Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.389 [INFO][4265] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" iface="eth0" netns="/var/run/netns/cni-c4b9d60b-547e-b54b-14b9-b9038490f470" Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.391 [INFO][4265] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" iface="eth0" netns="/var/run/netns/cni-c4b9d60b-547e-b54b-14b9-b9038490f470" Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.391 [INFO][4265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.391 [INFO][4265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.424 [INFO][4273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" HandleID="k8s-pod-network.200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.425 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.425 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.440 [WARNING][4273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" HandleID="k8s-pod-network.200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.440 [INFO][4273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" HandleID="k8s-pod-network.200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.444 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:14:56.457181 containerd[1599]: 2025-09-12 17:14:56.451 [INFO][4265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:14:56.458427 containerd[1599]: time="2025-09-12T17:14:56.458077663Z" level=info msg="TearDown network for sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\" successfully" Sep 12 17:14:56.458427 containerd[1599]: time="2025-09-12T17:14:56.458113421Z" level=info msg="StopPodSandbox for \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\" returns successfully" Sep 12 17:14:56.461700 systemd[1]: run-netns-cni\x2dc4b9d60b\x2d547e\x2db54b\x2d14b9\x2db9038490f470.mount: Deactivated successfully. Sep 12 17:14:56.482932 containerd[1599]: time="2025-09-12T17:14:56.482888087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q6k4l,Uid:64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0,Namespace:calico-system,Attempt:1,}" Sep 12 17:14:56.710256 systemd-networkd[1243]: cali14969e26624: Link UP Sep 12 17:14:56.712902 systemd-networkd[1243]: cali14969e26624: Gained carrier Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.572 [INFO][4283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0 goldmane-7988f88666- calico-system 64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0 895 0 2025-09-12 17:14:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-0-ae88ce84d6 goldmane-7988f88666-q6k4l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali14969e26624 [] [] }} ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" Namespace="calico-system" Pod="goldmane-7988f88666-q6k4l" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.573 [INFO][4283] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" Namespace="calico-system" Pod="goldmane-7988f88666-q6k4l" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.616 [INFO][4295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" HandleID="k8s-pod-network.d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" 
Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.617 [INFO][4295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" HandleID="k8s-pod-network.d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-0-ae88ce84d6", "pod":"goldmane-7988f88666-q6k4l", "timestamp":"2025-09-12 17:14:56.616805455 +0000 UTC"}, Hostname:"ci-4081-3-6-0-ae88ce84d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.617 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.617 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.617 [INFO][4295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-0-ae88ce84d6' Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.640 [INFO][4295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.653 [INFO][4295] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.661 [INFO][4295] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.665 [INFO][4295] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.671 [INFO][4295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.671 [INFO][4295] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.675 [INFO][4295] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94 Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.681 [INFO][4295] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.693 [INFO][4295] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.130/26] block=192.168.9.128/26 handle="k8s-pod-network.d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.693 [INFO][4295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.130/26] 
handle="k8s-pod-network.d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.693 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:14:56.754766 containerd[1599]: 2025-09-12 17:14:56.693 [INFO][4295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.130/26] IPv6=[] ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" HandleID="k8s-pod-network.d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:14:56.760403 containerd[1599]: 2025-09-12 17:14:56.699 [INFO][4283] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" Namespace="calico-system" Pod="goldmane-7988f88666-q6k4l" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"", Pod:"goldmane-7988f88666-q6k4l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14969e26624", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:56.760403 containerd[1599]: 2025-09-12 17:14:56.700 [INFO][4283] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.130/32] ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" Namespace="calico-system" Pod="goldmane-7988f88666-q6k4l" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:14:56.760403 containerd[1599]: 2025-09-12 17:14:56.700 [INFO][4283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14969e26624 ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" Namespace="calico-system" Pod="goldmane-7988f88666-q6k4l" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:14:56.760403 containerd[1599]: 2025-09-12 17:14:56.713 [INFO][4283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" Namespace="calico-system" Pod="goldmane-7988f88666-q6k4l" 
WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:14:56.760403 containerd[1599]: 2025-09-12 17:14:56.716 [INFO][4283] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" Namespace="calico-system" Pod="goldmane-7988f88666-q6k4l" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94", Pod:"goldmane-7988f88666-q6k4l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14969e26624", MAC:"be:c0:a0:70:97:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:56.760403 containerd[1599]: 2025-09-12 17:14:56.745 [INFO][4283] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94" Namespace="calico-system" Pod="goldmane-7988f88666-q6k4l" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:14:56.793407 containerd[1599]: time="2025-09-12T17:14:56.793089749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:56.793407 containerd[1599]: time="2025-09-12T17:14:56.793156105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:56.793407 containerd[1599]: time="2025-09-12T17:14:56.793171745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:56.795276 containerd[1599]: time="2025-09-12T17:14:56.795018433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:56.883055 containerd[1599]: time="2025-09-12T17:14:56.882990931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q6k4l,Uid:64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0,Namespace:calico-system,Attempt:1,} returns sandbox id \"d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94\"" Sep 12 17:14:57.117399 containerd[1599]: time="2025-09-12T17:14:57.117204760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:57.119712 containerd[1599]: time="2025-09-12T17:14:57.119554748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 17:14:57.120908 containerd[1599]: time="2025-09-12T17:14:57.120232190Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:57.122667 containerd[1599]: time="2025-09-12T17:14:57.122410908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:57.140810 containerd[1599]: time="2025-09-12T17:14:57.140351065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.48039513s" Sep 12 17:14:57.140810 containerd[1599]: time="2025-09-12T17:14:57.140410582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 17:14:57.143012 containerd[1599]: time="2025-09-12T17:14:57.142787569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:14:57.144268 containerd[1599]: time="2025-09-12T17:14:57.144228728Z" level=info msg="CreateContainer within sandbox \"75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:14:57.164740 containerd[1599]: time="2025-09-12T17:14:57.163169029Z" level=info msg="CreateContainer within sandbox \"75e22573e6665688478ff6a06f7e4cff1f7a47e1bfe9e5ae4deea8271baebece\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"87a950d323703b6115ac7f94d474f88376ffa83b8db9f39bf596ffa48a353b8e\"" Sep 12 17:14:57.165850 containerd[1599]: time="2025-09-12T17:14:57.165148038Z" level=info msg="StartContainer for \"87a950d323703b6115ac7f94d474f88376ffa83b8db9f39bf596ffa48a353b8e\"" Sep 12 17:14:57.259007 containerd[1599]: time="2025-09-12T17:14:57.258888476Z" level=info msg="StartContainer for \"87a950d323703b6115ac7f94d474f88376ffa83b8db9f39bf596ffa48a353b8e\" returns successfully" Sep 12 17:14:57.280410 containerd[1599]: time="2025-09-12T17:14:57.280306398Z" level=info msg="StopPodSandbox for \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\"" Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.358 [INFO][4401] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.359 [INFO][4401] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" iface="eth0" netns="/var/run/netns/cni-947203f1-e4df-4686-24c8-7a1f8e83c9f1" Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.359 [INFO][4401] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" iface="eth0" netns="/var/run/netns/cni-947203f1-e4df-4686-24c8-7a1f8e83c9f1" Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.359 [INFO][4401] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" iface="eth0" netns="/var/run/netns/cni-947203f1-e4df-4686-24c8-7a1f8e83c9f1" Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.359 [INFO][4401] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.359 [INFO][4401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.382 [INFO][4412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" HandleID="k8s-pod-network.e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.382 [INFO][4412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.382 [INFO][4412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.399 [WARNING][4412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" HandleID="k8s-pod-network.e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.399 [INFO][4412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" HandleID="k8s-pod-network.e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.402 [INFO][4412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:14:57.405720 containerd[1599]: 2025-09-12 17:14:57.403 [INFO][4401] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:14:57.406936 containerd[1599]: time="2025-09-12T17:14:57.405907215Z" level=info msg="TearDown network for sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\" successfully" Sep 12 17:14:57.406936 containerd[1599]: time="2025-09-12T17:14:57.405972051Z" level=info msg="StopPodSandbox for \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\" returns successfully" Sep 12 17:14:57.406936 containerd[1599]: time="2025-09-12T17:14:57.406679371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f64c47db9-x5xkm,Uid:924e4076-11ef-4908-9f70-45486e09017c,Namespace:calico-system,Attempt:1,}" Sep 12 17:14:57.456572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272187961.mount: Deactivated successfully. Sep 12 17:14:57.457059 systemd[1]: run-netns-cni\x2d947203f1\x2de4df\x2d4686\x2d24c8\x2d7a1f8e83c9f1.mount: Deactivated successfully. Sep 12 17:14:57.580518 systemd-networkd[1243]: cali01e1461e571: Link UP Sep 12 17:14:57.583756 systemd-networkd[1243]: cali01e1461e571: Gained carrier Sep 12 17:14:57.634437 kubelet[2755]: I0912 17:14:57.634041 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-574d6dd485-jzgw2" podStartSLOduration=2.022251139 podStartE2EDuration="6.634015778s" podCreationTimestamp="2025-09-12 17:14:51 +0000 UTC" firstStartedPulling="2025-09-12 17:14:52.530000307 +0000 UTC m=+39.410807793" lastFinishedPulling="2025-09-12 17:14:57.141764946 +0000 UTC m=+44.022572432" observedRunningTime="2025-09-12 17:14:57.595272625 +0000 UTC m=+44.476080151" watchObservedRunningTime="2025-09-12 17:14:57.634015778 +0000 UTC m=+44.514823264" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.470 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0 calico-kube-controllers-7f64c47db9- calico-system 924e4076-11ef-4908-9f70-45486e09017c 906 0 2025-09-12 17:14:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f64c47db9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-0-ae88ce84d6 calico-kube-controllers-7f64c47db9-x5xkm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali01e1461e571 [] [] }} ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Namespace="calico-system" Pod="calico-kube-controllers-7f64c47db9-x5xkm" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.471 [INFO][4419] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Namespace="calico-system" Pod="calico-kube-controllers-7f64c47db9-x5xkm" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.501 [INFO][4431] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" 
HandleID="k8s-pod-network.e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.501 [INFO][4431] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" HandleID="k8s-pod-network.e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-0-ae88ce84d6", "pod":"calico-kube-controllers-7f64c47db9-x5xkm", "timestamp":"2025-09-12 17:14:57.501221764 +0000 UTC"}, Hostname:"ci-4081-3-6-0-ae88ce84d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.501 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.501 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.501 [INFO][4431] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-0-ae88ce84d6' Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.513 [INFO][4431] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.522 [INFO][4431] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.532 [INFO][4431] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.538 [INFO][4431] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.545 [INFO][4431] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.545 [INFO][4431] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.550 [INFO][4431] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.558 [INFO][4431] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.568 [INFO][4431] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.131/26] block=192.168.9.128/26 handle="k8s-pod-network.e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 
17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.568 [INFO][4431] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.131/26] handle="k8s-pod-network.e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.569 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:14:57.635590 containerd[1599]: 2025-09-12 17:14:57.569 [INFO][4431] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.131/26] IPv6=[] ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" HandleID="k8s-pod-network.e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:14:57.636487 containerd[1599]: 2025-09-12 17:14:57.575 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Namespace="calico-system" Pod="calico-kube-controllers-7f64c47db9-x5xkm" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0", GenerateName:"calico-kube-controllers-7f64c47db9-", Namespace:"calico-system", SelfLink:"", UID:"924e4076-11ef-4908-9f70-45486e09017c", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f64c47db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"", Pod:"calico-kube-controllers-7f64c47db9-x5xkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali01e1461e571", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:57.636487 containerd[1599]: 2025-09-12 17:14:57.576 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.131/32] ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Namespace="calico-system" Pod="calico-kube-controllers-7f64c47db9-x5xkm" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:14:57.636487 containerd[1599]: 2025-09-12 17:14:57.576 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01e1461e571 ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Namespace="calico-system" Pod="calico-kube-controllers-7f64c47db9-x5xkm" 
WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:14:57.636487 containerd[1599]: 2025-09-12 17:14:57.585 [INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Namespace="calico-system" Pod="calico-kube-controllers-7f64c47db9-x5xkm" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:14:57.636487 containerd[1599]: 2025-09-12 17:14:57.600 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Namespace="calico-system" Pod="calico-kube-controllers-7f64c47db9-x5xkm" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0", GenerateName:"calico-kube-controllers-7f64c47db9-", Namespace:"calico-system", SelfLink:"", UID:"924e4076-11ef-4908-9f70-45486e09017c", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f64c47db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f", Pod:"calico-kube-controllers-7f64c47db9-x5xkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali01e1461e571", MAC:"c6:8e:04:20:0b:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:57.636487 containerd[1599]: 2025-09-12 17:14:57.631 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f" Namespace="calico-system" Pod="calico-kube-controllers-7f64c47db9-x5xkm" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:14:57.667169 containerd[1599]: time="2025-09-12T17:14:57.666357369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:57.667169 containerd[1599]: time="2025-09-12T17:14:57.667092568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:57.667169 containerd[1599]: time="2025-09-12T17:14:57.667150405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:57.668626 containerd[1599]: time="2025-09-12T17:14:57.667305916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:57.735510 containerd[1599]: time="2025-09-12T17:14:57.735464865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f64c47db9-x5xkm,Uid:924e4076-11ef-4908-9f70-45486e09017c,Namespace:calico-system,Attempt:1,} returns sandbox id \"e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f\"" Sep 12 17:14:57.925852 kubelet[2755]: I0912 17:14:57.925661 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:14:58.274973 containerd[1599]: time="2025-09-12T17:14:58.274316084Z" level=info msg="StopPodSandbox for \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\"" Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.350 [INFO][4549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.352 [INFO][4549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" iface="eth0" netns="/var/run/netns/cni-36b5abd7-e541-922b-efe4-2b8982edec32" Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.353 [INFO][4549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" iface="eth0" netns="/var/run/netns/cni-36b5abd7-e541-922b-efe4-2b8982edec32" Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.354 [INFO][4549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" iface="eth0" netns="/var/run/netns/cni-36b5abd7-e541-922b-efe4-2b8982edec32" Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.354 [INFO][4549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.354 [INFO][4549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.381 [INFO][4558] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" HandleID="k8s-pod-network.bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.381 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.381 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.392 [WARNING][4558] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" HandleID="k8s-pod-network.bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.392 [INFO][4558] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" HandleID="k8s-pod-network.bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.395 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:14:58.399688 containerd[1599]: 2025-09-12 17:14:58.397 [INFO][4549] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:14:58.400787 containerd[1599]: time="2025-09-12T17:14:58.399986946Z" level=info msg="TearDown network for sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\" successfully" Sep 12 17:14:58.400787 containerd[1599]: time="2025-09-12T17:14:58.400027744Z" level=info msg="StopPodSandbox for \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\" returns successfully" Sep 12 17:14:58.402513 containerd[1599]: time="2025-09-12T17:14:58.402032800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-62n5m,Uid:3eea0a07-67ac-4783-b82b-7eb9a72b754d,Namespace:calico-system,Attempt:1,}" Sep 12 17:14:58.453940 systemd[1]: run-netns-cni\x2d36b5abd7\x2de541\x2d922b\x2defe4\x2d2b8982edec32.mount: Deactivated successfully. Sep 12 17:14:58.638484 systemd-networkd[1243]: cali20a19bd2c97: Link UP Sep 12 17:14:58.639751 systemd-networkd[1243]: cali20a19bd2c97: Gained carrier Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.483 [INFO][4564] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0 csi-node-driver- calico-system 3eea0a07-67ac-4783-b82b-7eb9a72b754d 920 0 2025-09-12 17:14:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-0-ae88ce84d6 csi-node-driver-62n5m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali20a19bd2c97 [] [] }} ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Namespace="calico-system" Pod="csi-node-driver-62n5m" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.483 [INFO][4564] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Namespace="calico-system" Pod="csi-node-driver-62n5m" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.533 [INFO][4576] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" 
HandleID="k8s-pod-network.7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.534 [INFO][4576] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" HandleID="k8s-pod-network.7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-0-ae88ce84d6", "pod":"csi-node-driver-62n5m", "timestamp":"2025-09-12 17:14:58.533477884 +0000 UTC"}, Hostname:"ci-4081-3-6-0-ae88ce84d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.535 [INFO][4576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.535 [INFO][4576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.535 [INFO][4576] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-0-ae88ce84d6' Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.555 [INFO][4576] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.568 [INFO][4576] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.586 [INFO][4576] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.590 [INFO][4576] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.595 [INFO][4576] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.595 [INFO][4576] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.600 [INFO][4576] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619 Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.614 [INFO][4576] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.630 [INFO][4576] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.132/26] block=192.168.9.128/26 handle="k8s-pod-network.7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.631 
[INFO][4576] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.132/26] handle="k8s-pod-network.7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.631 [INFO][4576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:14:58.673157 containerd[1599]: 2025-09-12 17:14:58.631 [INFO][4576] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.132/26] IPv6=[] ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" HandleID="k8s-pod-network.7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:14:58.675323 containerd[1599]: 2025-09-12 17:14:58.634 [INFO][4564] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Namespace="calico-system" Pod="csi-node-driver-62n5m" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3eea0a07-67ac-4783-b82b-7eb9a72b754d", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"", Pod:"csi-node-driver-62n5m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali20a19bd2c97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:58.675323 containerd[1599]: 2025-09-12 17:14:58.634 [INFO][4564] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.132/32] ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Namespace="calico-system" Pod="csi-node-driver-62n5m" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:14:58.675323 containerd[1599]: 2025-09-12 17:14:58.634 [INFO][4564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20a19bd2c97 ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Namespace="calico-system" Pod="csi-node-driver-62n5m" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:14:58.675323 containerd[1599]: 2025-09-12 17:14:58.641 [INFO][4564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Namespace="calico-system" Pod="csi-node-driver-62n5m" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:14:58.675323 containerd[1599]: 2025-09-12 17:14:58.642 [INFO][4564] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Namespace="calico-system" Pod="csi-node-driver-62n5m" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3eea0a07-67ac-4783-b82b-7eb9a72b754d", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619", Pod:"csi-node-driver-62n5m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali20a19bd2c97", MAC:"16:1d:26:97:5d:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:58.675323 containerd[1599]: 2025-09-12 17:14:58.669 [INFO][4564] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619" Namespace="calico-system" Pod="csi-node-driver-62n5m" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:14:58.727493 containerd[1599]: time="2025-09-12T17:14:58.727133551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:58.727493 containerd[1599]: time="2025-09-12T17:14:58.727205147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:58.727493 containerd[1599]: time="2025-09-12T17:14:58.727222146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:58.727493 containerd[1599]: time="2025-09-12T17:14:58.727330741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:58.747482 systemd-networkd[1243]: cali14969e26624: Gained IPv6LL Sep 12 17:14:58.808583 containerd[1599]: time="2025-09-12T17:14:58.808509543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-62n5m,Uid:3eea0a07-67ac-4783-b82b-7eb9a72b754d,Namespace:calico-system,Attempt:1,} returns sandbox id \"7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619\"" Sep 12 17:14:58.939237 systemd-networkd[1243]: cali01e1461e571: Gained IPv6LL Sep 12 17:14:59.205091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567539150.mount: Deactivated successfully. Sep 12 17:14:59.275009 containerd[1599]: time="2025-09-12T17:14:59.274601123Z" level=info msg="StopPodSandbox for \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\"" Sep 12 17:14:59.275261 containerd[1599]: time="2025-09-12T17:14:59.275232333Z" level=info msg="StopPodSandbox for \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\"" Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.392 [INFO][4663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.392 [INFO][4663] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" iface="eth0" netns="/var/run/netns/cni-fd5e3516-ea47-6445-2ce5-823b24ca0c38" Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.393 [INFO][4663] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" iface="eth0" netns="/var/run/netns/cni-fd5e3516-ea47-6445-2ce5-823b24ca0c38" Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.393 [INFO][4663] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" iface="eth0" netns="/var/run/netns/cni-fd5e3516-ea47-6445-2ce5-823b24ca0c38" Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.393 [INFO][4663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.393 [INFO][4663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.427 [INFO][4678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" HandleID="k8s-pod-network.bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.427 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.427 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.437 [WARNING][4678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" HandleID="k8s-pod-network.bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.437 [INFO][4678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" HandleID="k8s-pod-network.bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.440 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:14:59.447590 containerd[1599]: 2025-09-12 17:14:59.442 [INFO][4663] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:14:59.448605 containerd[1599]: time="2025-09-12T17:14:59.448473923Z" level=info msg="TearDown network for sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\" successfully" Sep 12 17:14:59.448605 containerd[1599]: time="2025-09-12T17:14:59.448508041Z" level=info msg="StopPodSandbox for \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\" returns successfully" Sep 12 17:14:59.452207 systemd[1]: run-netns-cni\x2dfd5e3516\x2dea47\x2d6445\x2d2ce5\x2d823b24ca0c38.mount: Deactivated successfully. Sep 12 17:14:59.457160 containerd[1599]: time="2025-09-12T17:14:59.456973398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855468fc96-dmsh8,Uid:dd730022-9476-4c1a-8998-1f184b2b1808,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.378 [INFO][4656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.380 [INFO][4656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" iface="eth0" netns="/var/run/netns/cni-45f48f6c-4c4c-1593-2da0-151f0a594201" Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.381 [INFO][4656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" iface="eth0" netns="/var/run/netns/cni-45f48f6c-4c4c-1593-2da0-151f0a594201" Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.383 [INFO][4656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" iface="eth0" netns="/var/run/netns/cni-45f48f6c-4c4c-1593-2da0-151f0a594201" Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.384 [INFO][4656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.384 [INFO][4656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.455 [INFO][4673] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" HandleID="k8s-pod-network.ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.455 [INFO][4673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.455 [INFO][4673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.470 [WARNING][4673] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" HandleID="k8s-pod-network.ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.470 [INFO][4673] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" HandleID="k8s-pod-network.ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.476 [INFO][4673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:14:59.494535 containerd[1599]: 2025-09-12 17:14:59.488 [INFO][4656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:14:59.499540 containerd[1599]: time="2025-09-12T17:14:59.497973766Z" level=info msg="TearDown network for sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\" successfully" Sep 12 17:14:59.499540 containerd[1599]: time="2025-09-12T17:14:59.498020803Z" level=info msg="StopPodSandbox for \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\" returns successfully" Sep 12 17:14:59.499540 containerd[1599]: time="2025-09-12T17:14:59.499279304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mdw77,Uid:26a636e5-5783-4e4c-947c-929b9b44edd2,Namespace:kube-system,Attempt:1,}" Sep 12 17:14:59.499192 systemd[1]: run-netns-cni\x2d45f48f6c\x2d4c4c\x2d1593\x2d2da0\x2d151f0a594201.mount: Deactivated successfully. 
Sep 12 17:14:59.719993 systemd-networkd[1243]: cali4944fd4da81: Link UP Sep 12 17:14:59.725803 systemd-networkd[1243]: cali4944fd4da81: Gained carrier Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.563 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0 calico-apiserver-855468fc96- calico-apiserver dd730022-9476-4c1a-8998-1f184b2b1808 930 0 2025-09-12 17:14:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855468fc96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-0-ae88ce84d6 calico-apiserver-855468fc96-dmsh8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4944fd4da81 [] [] }} ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-dmsh8" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.564 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-dmsh8" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.633 [INFO][4713] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" HandleID="k8s-pod-network.222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.636 [INFO][4713] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" HandleID="k8s-pod-network.222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-0-ae88ce84d6", "pod":"calico-apiserver-855468fc96-dmsh8", "timestamp":"2025-09-12 17:14:59.633707542 +0000 UTC"}, Hostname:"ci-4081-3-6-0-ae88ce84d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.637 [INFO][4713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.637 [INFO][4713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.637 [INFO][4713] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-0-ae88ce84d6' Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.652 [INFO][4713] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.661 [INFO][4713] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.670 [INFO][4713] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.674 [INFO][4713] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.678 [INFO][4713] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.678 [INFO][4713] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.681 [INFO][4713] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5 Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.691 [INFO][4713] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.705 [INFO][4713] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.133/26] block=192.168.9.128/26 handle="k8s-pod-network.222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.705 [INFO][4713] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.133/26] handle="k8s-pod-network.222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.705 [INFO][4713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
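The ipam/ipam.go entries above spell out Calico's block-based assignment: take the host-wide IPAM lock, confirm this host's affinity for 192.168.9.128/26, load the block, claim the next free address, write the block back under a handle, then release the lock. The walk repeats per pod in this capture (.131 went to calico-kube-controllers, .132 to csi-node-driver, .133 here). A toy Go model of the claim-from-block step, assuming a simple first-free scan (names invented; this is not Calico's ipam package):

package main

import (
	"fmt"
	"net"
)

type block struct {
	cidr net.IPNet
	used map[string]string // ip -> handle; what "Writing block in order to claim IPs" persists
}

// assign claims the first unused address in the block for the given handle.
func (b *block) assign(handle string) (net.IP, error) {
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, taken := b.used[ip.String()]; !taken {
			b.used[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr.String())
}

// next returns the numerically following IP without mutating its argument.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		if out[i]++; out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.9.128/26")
	b := &block{cidr: *cidr, used: map[string]string{}}
	// Pretend .128-.130 were claimed before this excerpt and .131/.132 by the pods above.
	for _, s := range []string{"192.168.9.128", "192.168.9.129", "192.168.9.130", "192.168.9.131", "192.168.9.132"} {
		b.used[s] = "earlier"
	}
	ip, _ := b.assign("k8s-pod-network.222a2f809e5381ad...") // illustrative handle, truncated
	fmt.Println(ip)                                          // 192.168.9.133, matching calico-apiserver-855468fc96-dmsh8
}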
Sep 12 17:14:59.757320 containerd[1599]: 2025-09-12 17:14:59.705 [INFO][4713] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.133/26] IPv6=[] ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" HandleID="k8s-pod-network.222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:14:59.759246 containerd[1599]: 2025-09-12 17:14:59.713 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-dmsh8" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0", GenerateName:"calico-apiserver-855468fc96-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd730022-9476-4c1a-8998-1f184b2b1808", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855468fc96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"", Pod:"calico-apiserver-855468fc96-dmsh8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4944fd4da81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:59.759246 containerd[1599]: 2025-09-12 17:14:59.713 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.133/32] ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-dmsh8" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:14:59.759246 containerd[1599]: 2025-09-12 17:14:59.713 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4944fd4da81 ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-dmsh8" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:14:59.759246 containerd[1599]: 2025-09-12 17:14:59.728 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-dmsh8" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:14:59.759246 containerd[1599]: 2025-09-12 
17:14:59.728 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-dmsh8" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0", GenerateName:"calico-apiserver-855468fc96-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd730022-9476-4c1a-8998-1f184b2b1808", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855468fc96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5", Pod:"calico-apiserver-855468fc96-dmsh8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4944fd4da81", MAC:"2e:e7:96:24:8d:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:59.759246 containerd[1599]: 2025-09-12 17:14:59.748 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-dmsh8" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:14:59.813843 containerd[1599]: time="2025-09-12T17:14:59.813532859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:59.813843 containerd[1599]: time="2025-09-12T17:14:59.813597136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:59.814334 containerd[1599]: time="2025-09-12T17:14:59.813612815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:59.819275 containerd[1599]: time="2025-09-12T17:14:59.819109713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:59.836020 systemd-networkd[1243]: cali20a19bd2c97: Gained IPv6LL Sep 12 17:14:59.850356 systemd-networkd[1243]: calia158a15fcf5: Link UP Sep 12 17:14:59.850532 systemd-networkd[1243]: calia158a15fcf5: Gained carrier Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.587 [INFO][4697] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0 coredns-7c65d6cfc9- kube-system 26a636e5-5783-4e4c-947c-929b9b44edd2 929 0 2025-09-12 17:14:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-0-ae88ce84d6 coredns-7c65d6cfc9-mdw77 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia158a15fcf5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mdw77" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.587 [INFO][4697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mdw77" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.656 [INFO][4719] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" HandleID="k8s-pod-network.4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.658 [INFO][4719] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" HandleID="k8s-pod-network.4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-0-ae88ce84d6", "pod":"coredns-7c65d6cfc9-mdw77", "timestamp":"2025-09-12 17:14:59.656454019 +0000 UTC"}, Hostname:"ci-4081-3-6-0-ae88ce84d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.658 [INFO][4719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.705 [INFO][4719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.705 [INFO][4719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-0-ae88ce84d6' Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.756 [INFO][4719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.771 [INFO][4719] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.782 [INFO][4719] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.788 [INFO][4719] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.795 [INFO][4719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.795 [INFO][4719] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.798 [INFO][4719] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7 Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.812 [INFO][4719] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.823 [INFO][4719] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.134/26] block=192.168.9.128/26 handle="k8s-pod-network.4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.824 [INFO][4719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.134/26] handle="k8s-pod-network.4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.824 [INFO][4719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:14:59.894312 containerd[1599]: 2025-09-12 17:14:59.824 [INFO][4719] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.134/26] IPv6=[] ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" HandleID="k8s-pod-network.4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:14:59.896850 containerd[1599]: 2025-09-12 17:14:59.840 [INFO][4697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mdw77" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26a636e5-5783-4e4c-947c-929b9b44edd2", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"", Pod:"coredns-7c65d6cfc9-mdw77", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia158a15fcf5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:59.896850 containerd[1599]: 2025-09-12 17:14:59.841 [INFO][4697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.134/32] ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mdw77" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:14:59.896850 containerd[1599]: 2025-09-12 17:14:59.841 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia158a15fcf5 ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mdw77" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:14:59.896850 containerd[1599]: 2025-09-12 17:14:59.848 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-mdw77" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:14:59.896850 containerd[1599]: 2025-09-12 17:14:59.858 [INFO][4697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mdw77" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26a636e5-5783-4e4c-947c-929b9b44edd2", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7", Pod:"coredns-7c65d6cfc9-mdw77", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia158a15fcf5", MAC:"ea:2b:b5:56:e0:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:14:59.896850 containerd[1599]: 2025-09-12 17:14:59.884 [INFO][4697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mdw77" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:14:59.960549 containerd[1599]: time="2025-09-12T17:14:59.960348588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855468fc96-dmsh8,Uid:dd730022-9476-4c1a-8998-1f184b2b1808,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5\"" Sep 12 17:14:59.968859 containerd[1599]: time="2025-09-12T17:14:59.967158823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:59.968859 containerd[1599]: time="2025-09-12T17:14:59.967948266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:59.968859 containerd[1599]: time="2025-09-12T17:14:59.967963385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:59.968859 containerd[1599]: time="2025-09-12T17:14:59.968074300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:00.042476 containerd[1599]: time="2025-09-12T17:15:00.042428044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mdw77,Uid:26a636e5-5783-4e4c-947c-929b9b44edd2,Namespace:kube-system,Attempt:1,} returns sandbox id \"4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7\"" Sep 12 17:15:00.049511 containerd[1599]: time="2025-09-12T17:15:00.049454737Z" level=info msg="CreateContainer within sandbox \"4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:15:00.072238 containerd[1599]: time="2025-09-12T17:15:00.072060790Z" level=info msg="CreateContainer within sandbox \"4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ae703325dafbb67102220b23902cc1991f2383a213e24ecd0540f193a20ae44\"" Sep 12 17:15:00.073157 containerd[1599]: time="2025-09-12T17:15:00.073109384Z" level=info msg="StartContainer for \"0ae703325dafbb67102220b23902cc1991f2383a213e24ecd0540f193a20ae44\"" Sep 12 17:15:00.157521 containerd[1599]: time="2025-09-12T17:15:00.157314228Z" level=info msg="StartContainer for \"0ae703325dafbb67102220b23902cc1991f2383a213e24ecd0540f193a20ae44\" returns successfully" Sep 12 17:15:00.182507 containerd[1599]: time="2025-09-12T17:15:00.181549010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:00.183909 containerd[1599]: time="2025-09-12T17:15:00.183870948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 17:15:00.185108 containerd[1599]: time="2025-09-12T17:15:00.185016578Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:00.188595 containerd[1599]: time="2025-09-12T17:15:00.188399550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:00.190200 containerd[1599]: time="2025-09-12T17:15:00.190095836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.04726347s" Sep 12 17:15:00.190200 containerd[1599]: time="2025-09-12T17:15:00.190140874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 17:15:00.192067 containerd[1599]: time="2025-09-12T17:15:00.192020792Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:15:00.196407 containerd[1599]: time="2025-09-12T17:15:00.196227249Z" level=info msg="CreateContainer within sandbox \"d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:15:00.214435 containerd[1599]: time="2025-09-12T17:15:00.214090309Z" level=info msg="CreateContainer within sandbox \"d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"67e0145b7ff035ba168a17903a070f7359954ad1a52ac5e9874db2d77b3ab444\"" Sep 12 17:15:00.217323 containerd[1599]: time="2025-09-12T17:15:00.216794231Z" level=info msg="StartContainer for \"67e0145b7ff035ba168a17903a070f7359954ad1a52ac5e9874db2d77b3ab444\"" Sep 12 17:15:00.273427 containerd[1599]: time="2025-09-12T17:15:00.273331442Z" level=info msg="StopPodSandbox for \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\"" Sep 12 17:15:00.281932 containerd[1599]: time="2025-09-12T17:15:00.279567650Z" level=info msg="StopPodSandbox for \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\"" Sep 12 17:15:00.389934 containerd[1599]: time="2025-09-12T17:15:00.388096031Z" level=info msg="StartContainer for \"67e0145b7ff035ba168a17903a070f7359954ad1a52ac5e9874db2d77b3ab444\" returns successfully" Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.406 [INFO][4913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.409 [INFO][4913] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" iface="eth0" netns="/var/run/netns/cni-e1cd19e0-5c35-ac38-3c13-09ef3b55b0b9" Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.410 [INFO][4913] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" iface="eth0" netns="/var/run/netns/cni-e1cd19e0-5c35-ac38-3c13-09ef3b55b0b9" Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.412 [INFO][4913] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" iface="eth0" netns="/var/run/netns/cni-e1cd19e0-5c35-ac38-3c13-09ef3b55b0b9" Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.412 [INFO][4913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.412 [INFO][4913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.479 [INFO][4938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" HandleID="k8s-pod-network.4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.479 [INFO][4938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.480 [INFO][4938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.490 [WARNING][4938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" HandleID="k8s-pod-network.4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.490 [INFO][4938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" HandleID="k8s-pod-network.4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.494 [INFO][4938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:00.502904 containerd[1599]: 2025-09-12 17:15:00.498 [INFO][4913] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:00.507493 containerd[1599]: time="2025-09-12T17:15:00.504395714Z" level=info msg="TearDown network for sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\" successfully" Sep 12 17:15:00.507493 containerd[1599]: time="2025-09-12T17:15:00.504429992Z" level=info msg="StopPodSandbox for \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\" returns successfully" Sep 12 17:15:00.509139 containerd[1599]: time="2025-09-12T17:15:00.509088349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855468fc96-2tm6q,Uid:f9d17caf-52ed-421a-8db2-79e3916a6185,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:15:00.510664 systemd[1]: run-netns-cni\x2de1cd19e0\x2d5c35\x2dac38\x2d3c13\x2d09ef3b55b0b9.mount: Deactivated successfully. Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.442 [INFO][4917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.443 [INFO][4917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" iface="eth0" netns="/var/run/netns/cni-b3cb1089-701b-4a29-3cfd-5c54f8fd2876" Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.443 [INFO][4917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" iface="eth0" netns="/var/run/netns/cni-b3cb1089-701b-4a29-3cfd-5c54f8fd2876" Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.444 [INFO][4917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" iface="eth0" netns="/var/run/netns/cni-b3cb1089-701b-4a29-3cfd-5c54f8fd2876" Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.444 [INFO][4917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.444 [INFO][4917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.501 [INFO][4943] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" HandleID="k8s-pod-network.915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.501 [INFO][4943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.501 [INFO][4943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.522 [WARNING][4943] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" HandleID="k8s-pod-network.915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.522 [INFO][4943] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" HandleID="k8s-pod-network.915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.525 [INFO][4943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:00.532169 containerd[1599]: 2025-09-12 17:15:00.529 [INFO][4917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:00.532646 containerd[1599]: time="2025-09-12T17:15:00.532191020Z" level=info msg="TearDown network for sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\" successfully" Sep 12 17:15:00.532646 containerd[1599]: time="2025-09-12T17:15:00.532238658Z" level=info msg="StopPodSandbox for \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\" returns successfully" Sep 12 17:15:00.536211 containerd[1599]: time="2025-09-12T17:15:00.536154207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-spslt,Uid:0c3998a3-0465-446a-b912-478d1f626fde,Namespace:kube-system,Attempt:1,}" Sep 12 17:15:00.536747 systemd[1]: run-netns-cni\x2db3cb1089\x2d701b\x2d4a29\x2d3cfd\x2d5c54f8fd2876.mount: Deactivated successfully. 
Sep 12 17:15:00.634868 kubelet[2755]: I0912 17:15:00.632901 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mdw77" podStartSLOduration=41.632870664 podStartE2EDuration="41.632870664s" podCreationTimestamp="2025-09-12 17:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:15:00.630243419 +0000 UTC m=+47.511050985" watchObservedRunningTime="2025-09-12 17:15:00.632870664 +0000 UTC m=+47.513678150" Sep 12 17:15:00.703200 kubelet[2755]: I0912 17:15:00.703021 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-q6k4l" podStartSLOduration=20.39767708 podStartE2EDuration="23.702993323s" podCreationTimestamp="2025-09-12 17:14:37 +0000 UTC" firstStartedPulling="2025-09-12 17:14:56.886508518 +0000 UTC m=+43.767316044" lastFinishedPulling="2025-09-12 17:15:00.191824761 +0000 UTC m=+47.072632287" observedRunningTime="2025-09-12 17:15:00.699116612 +0000 UTC m=+47.579924138" watchObservedRunningTime="2025-09-12 17:15:00.702993323 +0000 UTC m=+47.583800849" Sep 12 17:15:00.834714 systemd-networkd[1243]: cali63786f71c17: Link UP Sep 12 17:15:00.837931 systemd-networkd[1243]: cali63786f71c17: Gained carrier Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.690 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0 calico-apiserver-855468fc96- calico-apiserver f9d17caf-52ed-421a-8db2-79e3916a6185 947 0 2025-09-12 17:14:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855468fc96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-0-ae88ce84d6 calico-apiserver-855468fc96-2tm6q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali63786f71c17 [] [] }} ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-2tm6q" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.693 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-2tm6q" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.746 [INFO][4984] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" HandleID="k8s-pod-network.619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.746 [INFO][4984] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" HandleID="k8s-pod-network.619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-0-ae88ce84d6", "pod":"calico-apiserver-855468fc96-2tm6q", "timestamp":"2025-09-12 17:15:00.746312871 +0000 UTC"}, Hostname:"ci-4081-3-6-0-ae88ce84d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.746 [INFO][4984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.746 [INFO][4984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.746 [INFO][4984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-0-ae88ce84d6' Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.762 [INFO][4984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.770 [INFO][4984] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.783 [INFO][4984] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.786 [INFO][4984] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.789 [INFO][4984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.789 [INFO][4984] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.792 [INFO][4984] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3 Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.799 [INFO][4984] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.813 [INFO][4984] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.135/26] block=192.168.9.128/26 handle="k8s-pod-network.619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.813 [INFO][4984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.135/26] handle="k8s-pod-network.619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.814 [INFO][4984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:15:00.867089 containerd[1599]: 2025-09-12 17:15:00.814 [INFO][4984] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.135/26] IPv6=[] ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" HandleID="k8s-pod-network.619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:00.868051 containerd[1599]: 2025-09-12 17:15:00.818 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-2tm6q" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0", GenerateName:"calico-apiserver-855468fc96-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9d17caf-52ed-421a-8db2-79e3916a6185", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855468fc96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"", Pod:"calico-apiserver-855468fc96-2tm6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63786f71c17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:00.868051 containerd[1599]: 2025-09-12 17:15:00.819 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.135/32] ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-2tm6q" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:00.868051 containerd[1599]: 2025-09-12 17:15:00.819 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63786f71c17 ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-2tm6q" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:00.868051 containerd[1599]: 2025-09-12 17:15:00.839 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-2tm6q" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:00.868051 containerd[1599]: 2025-09-12 
17:15:00.840 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-2tm6q" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0", GenerateName:"calico-apiserver-855468fc96-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9d17caf-52ed-421a-8db2-79e3916a6185", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855468fc96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3", Pod:"calico-apiserver-855468fc96-2tm6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63786f71c17", MAC:"8e:f3:2c:78:41:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:00.868051 containerd[1599]: 2025-09-12 17:15:00.860 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3" Namespace="calico-apiserver" Pod="calico-apiserver-855468fc96-2tm6q" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:00.893042 containerd[1599]: time="2025-09-12T17:15:00.891582208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:15:00.893042 containerd[1599]: time="2025-09-12T17:15:00.891655125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:15:00.893042 containerd[1599]: time="2025-09-12T17:15:00.891893715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:00.893042 containerd[1599]: time="2025-09-12T17:15:00.892459170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:00.960499 systemd-networkd[1243]: cali72fa3fe36a3: Link UP Sep 12 17:15:00.963763 systemd-networkd[1243]: cali72fa3fe36a3: Gained carrier Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.706 [INFO][4958] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0 coredns-7c65d6cfc9- kube-system 0c3998a3-0465-446a-b912-478d1f626fde 948 0 2025-09-12 17:14:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-0-ae88ce84d6 coredns-7c65d6cfc9-spslt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali72fa3fe36a3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-spslt" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.706 [INFO][4958] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-spslt" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.759 [INFO][4990] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" HandleID="k8s-pod-network.0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.759 [INFO][4990] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" HandleID="k8s-pod-network.0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3950), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-0-ae88ce84d6", "pod":"coredns-7c65d6cfc9-spslt", "timestamp":"2025-09-12 17:15:00.759127512 +0000 UTC"}, Hostname:"ci-4081-3-6-0-ae88ce84d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.759 [INFO][4990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.814 [INFO][4990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.815 [INFO][4990] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-0-ae88ce84d6' Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.870 [INFO][4990] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.878 [INFO][4990] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.893 [INFO][4990] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.897 [INFO][4990] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.901 [INFO][4990] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.901 [INFO][4990] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.905 [INFO][4990] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.915 [INFO][4990] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.928 [INFO][4990] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.136/26] block=192.168.9.128/26 handle="k8s-pod-network.0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.928 [INFO][4990] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.136/26] handle="k8s-pod-network.0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" host="ci-4081-3-6-0-ae88ce84d6" Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.929 [INFO][4990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:15:00.993704 containerd[1599]: 2025-09-12 17:15:00.929 [INFO][4990] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.136/26] IPv6=[] ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" HandleID="k8s-pod-network.0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:00.994583 containerd[1599]: 2025-09-12 17:15:00.938 [INFO][4958] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-spslt" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0c3998a3-0465-446a-b912-478d1f626fde", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"", Pod:"coredns-7c65d6cfc9-spslt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72fa3fe36a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:00.994583 containerd[1599]: 2025-09-12 17:15:00.938 [INFO][4958] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.136/32] ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-spslt" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:00.994583 containerd[1599]: 2025-09-12 17:15:00.938 [INFO][4958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72fa3fe36a3 ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-spslt" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:00.994583 containerd[1599]: 2025-09-12 17:15:00.964 [INFO][4958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-spslt" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:00.994583 containerd[1599]: 2025-09-12 17:15:00.967 [INFO][4958] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-spslt" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0c3998a3-0465-446a-b912-478d1f626fde", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e", Pod:"coredns-7c65d6cfc9-spslt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72fa3fe36a3", MAC:"52:9a:f0:52:54:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:00.994583 containerd[1599]: 2025-09-12 17:15:00.983 [INFO][4958] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-spslt" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:01.011131 containerd[1599]: time="2025-09-12T17:15:01.010764243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855468fc96-2tm6q,Uid:f9d17caf-52ed-421a-8db2-79e3916a6185,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3\"" Sep 12 17:15:01.024815 containerd[1599]: time="2025-09-12T17:15:01.024678369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:15:01.025658 containerd[1599]: time="2025-09-12T17:15:01.024898040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:15:01.025658 containerd[1599]: time="2025-09-12T17:15:01.024951238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:01.026164 containerd[1599]: time="2025-09-12T17:15:01.025314143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:01.081992 containerd[1599]: time="2025-09-12T17:15:01.081950328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-spslt,Uid:0c3998a3-0465-446a-b912-478d1f626fde,Namespace:kube-system,Attempt:1,} returns sandbox id \"0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e\"" Sep 12 17:15:01.091785 containerd[1599]: time="2025-09-12T17:15:01.091732498Z" level=info msg="CreateContainer within sandbox \"0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:15:01.103747 containerd[1599]: time="2025-09-12T17:15:01.103665583Z" level=info msg="CreateContainer within sandbox \"0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3b4722773dc46f6e479f13abbf3e6c6889052c930425a587e3ee48d9a790d68\"" Sep 12 17:15:01.105009 containerd[1599]: time="2025-09-12T17:15:01.104972251Z" level=info msg="StartContainer for \"a3b4722773dc46f6e479f13abbf3e6c6889052c930425a587e3ee48d9a790d68\"" Sep 12 17:15:01.160643 containerd[1599]: time="2025-09-12T17:15:01.160491360Z" level=info msg="StartContainer for \"a3b4722773dc46f6e479f13abbf3e6c6889052c930425a587e3ee48d9a790d68\" returns successfully" Sep 12 17:15:01.371148 systemd-networkd[1243]: cali4944fd4da81: Gained IPv6LL Sep 12 17:15:01.628621 systemd-networkd[1243]: calia158a15fcf5: Gained IPv6LL Sep 12 17:15:01.769616 kubelet[2755]: I0912 17:15:01.769537 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-spslt" podStartSLOduration=42.769514744 podStartE2EDuration="42.769514744s" podCreationTimestamp="2025-09-12 17:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:15:01.654072622 +0000 UTC m=+48.534880148" watchObservedRunningTime="2025-09-12 17:15:01.769514744 +0000 UTC m=+48.650322270" Sep 12 17:15:02.459481 systemd-networkd[1243]: cali63786f71c17: Gained IPv6LL Sep 12 17:15:02.717659 systemd-networkd[1243]: cali72fa3fe36a3: Gained IPv6LL Sep 12 17:15:04.390552 containerd[1599]: time="2025-09-12T17:15:04.390472714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:04.393219 containerd[1599]: time="2025-09-12T17:15:04.393161596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 17:15:04.398288 containerd[1599]: time="2025-09-12T17:15:04.397775702Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:04.404461 containerd[1599]: time="2025-09-12T17:15:04.404370911Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:04.406331 containerd[1599]: time="2025-09-12T17:15:04.406267656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.214208225s" Sep 12 17:15:04.406331 containerd[1599]: time="2025-09-12T17:15:04.406318534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 17:15:04.414105 containerd[1599]: time="2025-09-12T17:15:04.412762227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:15:04.477500 containerd[1599]: time="2025-09-12T17:15:04.477439670Z" level=info msg="CreateContainer within sandbox \"e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:15:04.517039 containerd[1599]: time="2025-09-12T17:15:04.516872685Z" level=info msg="CreateContainer within sandbox \"e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"090b9f6faa18588856acfde44656417ee926086f8e05014784db1c5426fe6463\"" Sep 12 17:15:04.519263 containerd[1599]: time="2025-09-12T17:15:04.519223817Z" level=info msg="StartContainer for \"090b9f6faa18588856acfde44656417ee926086f8e05014784db1c5426fe6463\"" Sep 12 17:15:04.633728 containerd[1599]: time="2025-09-12T17:15:04.633667814Z" level=info msg="StartContainer for \"090b9f6faa18588856acfde44656417ee926086f8e05014784db1c5426fe6463\" returns successfully" Sep 12 17:15:05.712930 systemd[1]: run-containerd-runc-k8s.io-090b9f6faa18588856acfde44656417ee926086f8e05014784db1c5426fe6463-runc.QAzsBS.mount: Deactivated successfully. 
Sep 12 17:15:05.852146 kubelet[2755]: I0912 17:15:05.851647 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f64c47db9-x5xkm" podStartSLOduration=22.179982625 podStartE2EDuration="28.851613693s" podCreationTimestamp="2025-09-12 17:14:37 +0000 UTC" firstStartedPulling="2025-09-12 17:14:57.737737498 +0000 UTC m=+44.618545024" lastFinishedPulling="2025-09-12 17:15:04.409368566 +0000 UTC m=+51.290176092" observedRunningTime="2025-09-12 17:15:05.679938776 +0000 UTC m=+52.560746302" watchObservedRunningTime="2025-09-12 17:15:05.851613693 +0000 UTC m=+52.732421219" Sep 12 17:15:06.266919 containerd[1599]: time="2025-09-12T17:15:06.265796455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:06.268659 containerd[1599]: time="2025-09-12T17:15:06.268611752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 17:15:06.270116 containerd[1599]: time="2025-09-12T17:15:06.270029000Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:06.272550 containerd[1599]: time="2025-09-12T17:15:06.272512465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:06.273616 containerd[1599]: time="2025-09-12T17:15:06.273574881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.860764695s" Sep 12 17:15:06.273743 containerd[1599]: time="2025-09-12T17:15:06.273727318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 17:15:06.278852 containerd[1599]: time="2025-09-12T17:15:06.278424493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:15:06.279963 containerd[1599]: time="2025-09-12T17:15:06.279921699Z" level=info msg="CreateContainer within sandbox \"7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:15:06.391946 containerd[1599]: time="2025-09-12T17:15:06.391053252Z" level=info msg="CreateContainer within sandbox \"7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0cdf420c1b1398afaf6808046c9fa3c9474b1b282f40794689ccd30b7b50532a\"" Sep 12 17:15:06.396113 containerd[1599]: time="2025-09-12T17:15:06.395648309Z" level=info msg="StartContainer for \"0cdf420c1b1398afaf6808046c9fa3c9474b1b282f40794689ccd30b7b50532a\"" Sep 12 17:15:06.637978 containerd[1599]: time="2025-09-12T17:15:06.637780090Z" level=info msg="StartContainer for \"0cdf420c1b1398afaf6808046c9fa3c9474b1b282f40794689ccd30b7b50532a\" returns successfully" Sep 12 17:15:06.698362 systemd[1]: run-containerd-runc-k8s.io-0cdf420c1b1398afaf6808046c9fa3c9474b1b282f40794689ccd30b7b50532a-runc.1mhZEf.mount: Deactivated 
successfully. Sep 12 17:15:10.118719 containerd[1599]: time="2025-09-12T17:15:10.116357087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:10.118719 containerd[1599]: time="2025-09-12T17:15:10.117618594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 17:15:10.119615 containerd[1599]: time="2025-09-12T17:15:10.119575374Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:10.122249 containerd[1599]: time="2025-09-12T17:15:10.122185947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:10.123059 containerd[1599]: time="2025-09-12T17:15:10.123015378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.844500728s" Sep 12 17:15:10.123059 containerd[1599]: time="2025-09-12T17:15:10.123055098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:15:10.125678 containerd[1599]: time="2025-09-12T17:15:10.125537272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:15:10.127074 containerd[1599]: time="2025-09-12T17:15:10.127026417Z" level=info msg="CreateContainer within sandbox \"222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:15:10.164368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84961461.mount: Deactivated successfully. 
Sep 12 17:15:10.169763 containerd[1599]: time="2025-09-12T17:15:10.169589819Z" level=info msg="CreateContainer within sandbox \"222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6e9a2af197b73ed368fee72516f9c0295a4e6e380b33d1815677f468b3d31440\"" Sep 12 17:15:10.171615 containerd[1599]: time="2025-09-12T17:15:10.170410611Z" level=info msg="StartContainer for \"6e9a2af197b73ed368fee72516f9c0295a4e6e380b33d1815677f468b3d31440\"" Sep 12 17:15:10.265501 containerd[1599]: time="2025-09-12T17:15:10.265446553Z" level=info msg="StartContainer for \"6e9a2af197b73ed368fee72516f9c0295a4e6e380b33d1815677f468b3d31440\" returns successfully" Sep 12 17:15:10.522307 containerd[1599]: time="2025-09-12T17:15:10.522243872Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:10.523777 containerd[1599]: time="2025-09-12T17:15:10.523729777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:15:10.529379 containerd[1599]: time="2025-09-12T17:15:10.529215800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 403.540129ms" Sep 12 17:15:10.529379 containerd[1599]: time="2025-09-12T17:15:10.529270640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:15:10.535672 containerd[1599]: time="2025-09-12T17:15:10.535574695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:15:10.539227 containerd[1599]: time="2025-09-12T17:15:10.539163338Z" level=info msg="CreateContainer within sandbox \"619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:15:10.566730 containerd[1599]: time="2025-09-12T17:15:10.566660815Z" level=info msg="CreateContainer within sandbox \"619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7a994a8b172eeb0d8568d42d7aa06f2c0e327bd2f8786eb437e0a664085442e9\"" Sep 12 17:15:10.570091 containerd[1599]: time="2025-09-12T17:15:10.570036740Z" level=info msg="StartContainer for \"7a994a8b172eeb0d8568d42d7aa06f2c0e327bd2f8786eb437e0a664085442e9\"" Sep 12 17:15:10.752794 kubelet[2755]: I0912 17:15:10.751251 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-855468fc96-dmsh8" podStartSLOduration=29.590440027 podStartE2EDuration="39.751231517s" podCreationTimestamp="2025-09-12 17:14:31 +0000 UTC" firstStartedPulling="2025-09-12 17:14:59.963553835 +0000 UTC m=+46.844361361" lastFinishedPulling="2025-09-12 17:15:10.124345325 +0000 UTC m=+57.005152851" observedRunningTime="2025-09-12 17:15:10.750780881 +0000 UTC m=+57.631588407" watchObservedRunningTime="2025-09-12 17:15:10.751231517 +0000 UTC m=+57.632039003" Sep 12 17:15:10.850822 containerd[1599]: time="2025-09-12T17:15:10.850442096Z" level=info msg="StartContainer for 
\"7a994a8b172eeb0d8568d42d7aa06f2c0e327bd2f8786eb437e0a664085442e9\" returns successfully" Sep 12 17:15:11.752905 kubelet[2755]: I0912 17:15:11.744652 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-855468fc96-2tm6q" podStartSLOduration=31.225611148 podStartE2EDuration="40.744630693s" podCreationTimestamp="2025-09-12 17:14:31 +0000 UTC" firstStartedPulling="2025-09-12 17:15:01.013235424 +0000 UTC m=+47.894042950" lastFinishedPulling="2025-09-12 17:15:10.532254969 +0000 UTC m=+57.413062495" observedRunningTime="2025-09-12 17:15:11.727404702 +0000 UTC m=+58.608212228" watchObservedRunningTime="2025-09-12 17:15:11.744630693 +0000 UTC m=+58.625438179" Sep 12 17:15:12.986897 containerd[1599]: time="2025-09-12T17:15:12.985024975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:12.987652 containerd[1599]: time="2025-09-12T17:15:12.987611603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 17:15:12.988962 containerd[1599]: time="2025-09-12T17:15:12.988143120Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:12.992271 containerd[1599]: time="2025-09-12T17:15:12.992223781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:12.993116 containerd[1599]: time="2025-09-12T17:15:12.992994217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.457362123s" Sep 12 17:15:12.993881 containerd[1599]: time="2025-09-12T17:15:12.993853693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 12 17:15:13.012854 containerd[1599]: time="2025-09-12T17:15:13.012779191Z" level=info msg="CreateContainer within sandbox \"7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:15:13.035223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2371442735.mount: Deactivated successfully. 
Sep 12 17:15:13.036576 containerd[1599]: time="2025-09-12T17:15:13.035906181Z" level=info msg="CreateContainer within sandbox \"7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0de1c4e0ae93eb97d8490430e201e33f1e81e4181176671bc7b302a7bdeee102\"" Sep 12 17:15:13.037638 containerd[1599]: time="2025-09-12T17:15:13.037319018Z" level=info msg="StartContainer for \"0de1c4e0ae93eb97d8490430e201e33f1e81e4181176671bc7b302a7bdeee102\"" Sep 12 17:15:13.106982 systemd[1]: run-containerd-runc-k8s.io-0de1c4e0ae93eb97d8490430e201e33f1e81e4181176671bc7b302a7bdeee102-runc.o7o0d9.mount: Deactivated successfully. Sep 12 17:15:13.271473 containerd[1599]: time="2025-09-12T17:15:13.270228992Z" level=info msg="StopPodSandbox for \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\"" Sep 12 17:15:13.273591 containerd[1599]: time="2025-09-12T17:15:13.273282585Z" level=info msg="StartContainer for \"0de1c4e0ae93eb97d8490430e201e33f1e81e4181176671bc7b302a7bdeee102\" returns successfully" Sep 12 17:15:13.409851 kubelet[2755]: I0912 17:15:13.409786 2755 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:15:13.419496 kubelet[2755]: I0912 17:15:13.418079 2755 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.349 [WARNING][5423] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0c3998a3-0465-446a-b912-478d1f626fde", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e", Pod:"coredns-7c65d6cfc9-spslt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72fa3fe36a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.350 [INFO][5423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.350 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" iface="eth0" netns="" Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.350 [INFO][5423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.350 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.400 [INFO][5431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" HandleID="k8s-pod-network.915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.402 [INFO][5431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.402 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.417 [WARNING][5431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" HandleID="k8s-pod-network.915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.417 [INFO][5431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" HandleID="k8s-pod-network.915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.419 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:13.432075 containerd[1599]: 2025-09-12 17:15:13.428 [INFO][5423] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:13.432532 containerd[1599]: time="2025-09-12T17:15:13.432191960Z" level=info msg="TearDown network for sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\" successfully" Sep 12 17:15:13.432532 containerd[1599]: time="2025-09-12T17:15:13.432238960Z" level=info msg="StopPodSandbox for \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\" returns successfully" Sep 12 17:15:13.434880 containerd[1599]: time="2025-09-12T17:15:13.432954319Z" level=info msg="RemovePodSandbox for \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\"" Sep 12 17:15:13.434880 containerd[1599]: time="2025-09-12T17:15:13.432995639Z" level=info msg="Forcibly stopping sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\"" Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.487 [WARNING][5449] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0c3998a3-0465-446a-b912-478d1f626fde", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"0fdf8b6ad90027acec1e545aeec9fbd548be01e024f1eb2c590c228a9ea3061e", Pod:"coredns-7c65d6cfc9-spslt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72fa3fe36a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.488 [INFO][5449] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.488 [INFO][5449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" iface="eth0" netns="" Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.488 [INFO][5449] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.488 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.525 [INFO][5456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" HandleID="k8s-pod-network.915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.525 [INFO][5456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.525 [INFO][5456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.538 [WARNING][5456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" HandleID="k8s-pod-network.915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.539 [INFO][5456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" HandleID="k8s-pod-network.915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--spslt-eth0" Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.541 [INFO][5456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:13.552073 containerd[1599]: 2025-09-12 17:15:13.547 [INFO][5449] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135" Sep 12 17:15:13.552073 containerd[1599]: time="2025-09-12T17:15:13.552048100Z" level=info msg="TearDown network for sandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\" successfully" Sep 12 17:15:13.565457 containerd[1599]: time="2025-09-12T17:15:13.565205432Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:15:13.565457 containerd[1599]: time="2025-09-12T17:15:13.565305591Z" level=info msg="RemovePodSandbox \"915e435c98ce6edd79ef1b4e2492b2f8a0b32ef444f37ec20659a70c04afc135\" returns successfully" Sep 12 17:15:13.572612 containerd[1599]: time="2025-09-12T17:15:13.570348061Z" level=info msg="StopPodSandbox for \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\"" Sep 12 17:15:13.773859 kubelet[2755]: I0912 17:15:13.771152 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:15:13.803940 kubelet[2755]: I0912 17:15:13.803087 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-62n5m" podStartSLOduration=22.618180485 podStartE2EDuration="36.803064635s" podCreationTimestamp="2025-09-12 17:14:37 +0000 UTC" firstStartedPulling="2025-09-12 17:14:58.811029733 +0000 UTC m=+45.691837259" lastFinishedPulling="2025-09-12 17:15:12.995913923 +0000 UTC m=+59.876721409" observedRunningTime="2025-09-12 17:15:13.801213439 +0000 UTC m=+60.682020965" watchObservedRunningTime="2025-09-12 17:15:13.803064635 +0000 UTC m=+60.683872161" Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.635 [WARNING][5470] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0", GenerateName:"calico-kube-controllers-7f64c47db9-", Namespace:"calico-system", SelfLink:"", UID:"924e4076-11ef-4908-9f70-45486e09017c", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f64c47db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f", Pod:"calico-kube-controllers-7f64c47db9-x5xkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali01e1461e571", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.635 [INFO][5470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.635 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" iface="eth0" netns="" Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.635 [INFO][5470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.635 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.749 [INFO][5478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" HandleID="k8s-pod-network.e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.749 [INFO][5478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.749 [INFO][5478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.784 [WARNING][5478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" HandleID="k8s-pod-network.e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.784 [INFO][5478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" HandleID="k8s-pod-network.e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.801 [INFO][5478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:13.822112 containerd[1599]: 2025-09-12 17:15:13.817 [INFO][5470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:15:13.822541 containerd[1599]: time="2025-09-12T17:15:13.822144954Z" level=info msg="TearDown network for sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\" successfully" Sep 12 17:15:13.822541 containerd[1599]: time="2025-09-12T17:15:13.822172514Z" level=info msg="StopPodSandbox for \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\" returns successfully" Sep 12 17:15:13.825689 containerd[1599]: time="2025-09-12T17:15:13.825642146Z" level=info msg="RemovePodSandbox for \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\"" Sep 12 17:15:13.825689 containerd[1599]: time="2025-09-12T17:15:13.825690626Z" level=info msg="Forcibly stopping sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\"" Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.909 [WARNING][5492] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0", GenerateName:"calico-kube-controllers-7f64c47db9-", Namespace:"calico-system", SelfLink:"", UID:"924e4076-11ef-4908-9f70-45486e09017c", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f64c47db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"e0b3dd9d8779fbf3a7c4afda66e6fd4dce585f4285f265c4979df190b0b1b37f", Pod:"calico-kube-controllers-7f64c47db9-x5xkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali01e1461e571", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.912 [INFO][5492] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.912 [INFO][5492] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" iface="eth0" netns="" Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.912 [INFO][5492] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.912 [INFO][5492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.946 [INFO][5499] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" HandleID="k8s-pod-network.e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.946 [INFO][5499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.947 [INFO][5499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.958 [WARNING][5499] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" HandleID="k8s-pod-network.e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.958 [INFO][5499] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" HandleID="k8s-pod-network.e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--kube--controllers--7f64c47db9--x5xkm-eth0" Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.961 [INFO][5499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:13.966662 containerd[1599]: 2025-09-12 17:15:13.964 [INFO][5492] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09" Sep 12 17:15:13.968327 containerd[1599]: time="2025-09-12T17:15:13.967766798Z" level=info msg="TearDown network for sandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\" successfully" Sep 12 17:15:13.985172 containerd[1599]: time="2025-09-12T17:15:13.985057840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:15:13.985843 containerd[1599]: time="2025-09-12T17:15:13.985382520Z" level=info msg="RemovePodSandbox \"e7daf2ea5dc2d79c7f7d05f059d5e4a8b31a4d0338e9276efd3dc142d3d60f09\" returns successfully" Sep 12 17:15:13.986348 containerd[1599]: time="2025-09-12T17:15:13.986061798Z" level=info msg="StopPodSandbox for \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\"" Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.044 [WARNING][5513] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26a636e5-5783-4e4c-947c-929b9b44edd2", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7", Pod:"coredns-7c65d6cfc9-mdw77", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia158a15fcf5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.044 [INFO][5513] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.044 [INFO][5513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" iface="eth0" netns="" Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.045 [INFO][5513] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.045 [INFO][5513] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.070 [INFO][5520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" HandleID="k8s-pod-network.ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.070 [INFO][5520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.070 [INFO][5520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.080 [WARNING][5520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" HandleID="k8s-pod-network.ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.081 [INFO][5520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" HandleID="k8s-pod-network.ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.083 [INFO][5520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:14.086334 containerd[1599]: 2025-09-12 17:15:14.084 [INFO][5513] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:15:14.088633 containerd[1599]: time="2025-09-12T17:15:14.087222635Z" level=info msg="TearDown network for sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\" successfully" Sep 12 17:15:14.088633 containerd[1599]: time="2025-09-12T17:15:14.087265115Z" level=info msg="StopPodSandbox for \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\" returns successfully" Sep 12 17:15:14.088633 containerd[1599]: time="2025-09-12T17:15:14.088270075Z" level=info msg="RemovePodSandbox for \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\"" Sep 12 17:15:14.088633 containerd[1599]: time="2025-09-12T17:15:14.088325716Z" level=info msg="Forcibly stopping sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\"" Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.146 [WARNING][5535] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26a636e5-5783-4e4c-947c-929b9b44edd2", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"4690825add46096b4c63ebb6f7b0d0f89d3df77375df2a939a6d8b3e4b13ede7", Pod:"coredns-7c65d6cfc9-mdw77", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia158a15fcf5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.146 [INFO][5535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.146 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" iface="eth0" netns="" Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.146 [INFO][5535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.146 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.170 [INFO][5543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" HandleID="k8s-pod-network.ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.170 [INFO][5543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.171 [INFO][5543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.185 [WARNING][5543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" HandleID="k8s-pod-network.ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.185 [INFO][5543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" HandleID="k8s-pod-network.ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-coredns--7c65d6cfc9--mdw77-eth0" Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.190 [INFO][5543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:14.205294 containerd[1599]: 2025-09-12 17:15:14.197 [INFO][5535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52" Sep 12 17:15:14.205294 containerd[1599]: time="2025-09-12T17:15:14.205175798Z" level=info msg="TearDown network for sandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\" successfully" Sep 12 17:15:14.212314 containerd[1599]: time="2025-09-12T17:15:14.211422241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:15:14.212314 containerd[1599]: time="2025-09-12T17:15:14.211560441Z" level=info msg="RemovePodSandbox \"ba1e9a0d6564c986eaf20d492f4959d167d01f7a0998c4e9a6256624bd490e52\" returns successfully" Sep 12 17:15:14.212314 containerd[1599]: time="2025-09-12T17:15:14.212104841Z" level=info msg="StopPodSandbox for \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\"" Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.296 [WARNING][5559] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0", GenerateName:"calico-apiserver-855468fc96-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9d17caf-52ed-421a-8db2-79e3916a6185", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855468fc96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3", Pod:"calico-apiserver-855468fc96-2tm6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63786f71c17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.296 [INFO][5559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.296 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" iface="eth0" netns="" Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.296 [INFO][5559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.296 [INFO][5559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.322 [INFO][5568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" HandleID="k8s-pod-network.4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.322 [INFO][5568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.322 [INFO][5568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.337 [WARNING][5568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" HandleID="k8s-pod-network.4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.338 [INFO][5568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" HandleID="k8s-pod-network.4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.340 [INFO][5568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:14.350095 containerd[1599]: 2025-09-12 17:15:14.341 [INFO][5559] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:14.350095 containerd[1599]: time="2025-09-12T17:15:14.349898011Z" level=info msg="TearDown network for sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\" successfully" Sep 12 17:15:14.350095 containerd[1599]: time="2025-09-12T17:15:14.349954091Z" level=info msg="StopPodSandbox for \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\" returns successfully" Sep 12 17:15:14.361456 containerd[1599]: time="2025-09-12T17:15:14.360975735Z" level=info msg="RemovePodSandbox for \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\"" Sep 12 17:15:14.361456 containerd[1599]: time="2025-09-12T17:15:14.361042215Z" level=info msg="Forcibly stopping sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\"" Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.478 [WARNING][5582] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0", GenerateName:"calico-apiserver-855468fc96-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9d17caf-52ed-421a-8db2-79e3916a6185", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855468fc96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"619b68097ce7cba9c7096d0e20a7abfb3cec11241067b50d478da60d815c43b3", Pod:"calico-apiserver-855468fc96-2tm6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63786f71c17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.479 [INFO][5582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.480 [INFO][5582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" iface="eth0" netns="" Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.480 [INFO][5582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.480 [INFO][5582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.543 [INFO][5590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" HandleID="k8s-pod-network.4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.546 [INFO][5590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.546 [INFO][5590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.568 [WARNING][5590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" HandleID="k8s-pod-network.4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.568 [INFO][5590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" HandleID="k8s-pod-network.4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--2tm6q-eth0" Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.570 [INFO][5590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:14.577869 containerd[1599]: 2025-09-12 17:15:14.575 [INFO][5582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7" Sep 12 17:15:14.580947 containerd[1599]: time="2025-09-12T17:15:14.577814095Z" level=info msg="TearDown network for sandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\" successfully" Sep 12 17:15:14.588004 containerd[1599]: time="2025-09-12T17:15:14.587781619Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:15:14.588004 containerd[1599]: time="2025-09-12T17:15:14.587948779Z" level=info msg="RemovePodSandbox \"4ae788d451772e763376b28a6bfbeaeff4d7de05771112d078737d3002e933a7\" returns successfully" Sep 12 17:15:14.590256 containerd[1599]: time="2025-09-12T17:15:14.590128579Z" level=info msg="StopPodSandbox for \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\"" Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.654 [WARNING][5604] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0", GenerateName:"calico-apiserver-855468fc96-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd730022-9476-4c1a-8998-1f184b2b1808", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855468fc96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5", Pod:"calico-apiserver-855468fc96-dmsh8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4944fd4da81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.655 [INFO][5604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.655 [INFO][5604] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" iface="eth0" netns="" Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.655 [INFO][5604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.655 [INFO][5604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.684 [INFO][5611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" HandleID="k8s-pod-network.bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.684 [INFO][5611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.684 [INFO][5611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.698 [WARNING][5611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" HandleID="k8s-pod-network.bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.698 [INFO][5611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" HandleID="k8s-pod-network.bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.700 [INFO][5611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:14.706851 containerd[1599]: 2025-09-12 17:15:14.702 [INFO][5604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:15:14.706851 containerd[1599]: time="2025-09-12T17:15:14.706177982Z" level=info msg="TearDown network for sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\" successfully" Sep 12 17:15:14.706851 containerd[1599]: time="2025-09-12T17:15:14.706204502Z" level=info msg="StopPodSandbox for \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\" returns successfully" Sep 12 17:15:14.708428 containerd[1599]: time="2025-09-12T17:15:14.708374263Z" level=info msg="RemovePodSandbox for \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\"" Sep 12 17:15:14.708428 containerd[1599]: time="2025-09-12T17:15:14.708426263Z" level=info msg="Forcibly stopping sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\"" Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.753 [WARNING][5625] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0", GenerateName:"calico-apiserver-855468fc96-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd730022-9476-4c1a-8998-1f184b2b1808", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855468fc96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"222a2f809e5381ad6ac7862217fd4260813dfccf3c7d771fcfaffabf893b1af5", Pod:"calico-apiserver-855468fc96-dmsh8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4944fd4da81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.754 [INFO][5625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.754 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" iface="eth0" netns="" Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.754 [INFO][5625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.754 [INFO][5625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.799 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" HandleID="k8s-pod-network.bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.799 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.799 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.831 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" HandleID="k8s-pod-network.bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.831 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" HandleID="k8s-pod-network.bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-calico--apiserver--855468fc96--dmsh8-eth0" Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.846 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:14.877598 containerd[1599]: 2025-09-12 17:15:14.861 [INFO][5625] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c" Sep 12 17:15:14.878174 containerd[1599]: time="2025-09-12T17:15:14.877641965Z" level=info msg="TearDown network for sandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\" successfully" Sep 12 17:15:14.897179 containerd[1599]: time="2025-09-12T17:15:14.897117452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:15:14.897355 containerd[1599]: time="2025-09-12T17:15:14.897217292Z" level=info msg="RemovePodSandbox \"bf445d477a926df8ead0f20df072609f9963b26aa74051f666e19d0cf5386f0c\" returns successfully" Sep 12 17:15:14.898318 containerd[1599]: time="2025-09-12T17:15:14.897877612Z" level=info msg="StopPodSandbox for \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\"" Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.001 [WARNING][5687] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.001 [INFO][5687] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.001 [INFO][5687] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" iface="eth0" netns="" Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.001 [INFO][5687] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.001 [INFO][5687] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.030 [INFO][5696] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" HandleID="k8s-pod-network.308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.030 [INFO][5696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.031 [INFO][5696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.041 [WARNING][5696] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" HandleID="k8s-pod-network.308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.041 [INFO][5696] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" HandleID="k8s-pod-network.308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.044 [INFO][5696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:15.048405 containerd[1599]: 2025-09-12 17:15:15.046 [INFO][5687] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:15:15.050451 containerd[1599]: time="2025-09-12T17:15:15.048604422Z" level=info msg="TearDown network for sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\" successfully" Sep 12 17:15:15.050451 containerd[1599]: time="2025-09-12T17:15:15.048946063Z" level=info msg="StopPodSandbox for \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\" returns successfully" Sep 12 17:15:15.050945 containerd[1599]: time="2025-09-12T17:15:15.050692388Z" level=info msg="RemovePodSandbox for \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\"" Sep 12 17:15:15.050945 containerd[1599]: time="2025-09-12T17:15:15.050825749Z" level=info msg="Forcibly stopping sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\"" Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.099 [WARNING][5710] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" WorkloadEndpoint="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.100 [INFO][5710] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.100 [INFO][5710] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" iface="eth0" netns="" Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.100 [INFO][5710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.100 [INFO][5710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.131 [INFO][5717] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" HandleID="k8s-pod-network.308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.131 [INFO][5717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.131 [INFO][5717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.144 [WARNING][5717] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" HandleID="k8s-pod-network.308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.145 [INFO][5717] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" HandleID="k8s-pod-network.308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-whisker--75d9845454--rbgv5-eth0" Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.147 [INFO][5717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:15.151207 containerd[1599]: 2025-09-12 17:15:15.149 [INFO][5710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8" Sep 12 17:15:15.153079 containerd[1599]: time="2025-09-12T17:15:15.152024355Z" level=info msg="TearDown network for sandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\" successfully" Sep 12 17:15:15.157725 containerd[1599]: time="2025-09-12T17:15:15.157682731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:15:15.158113 containerd[1599]: time="2025-09-12T17:15:15.157962211Z" level=info msg="RemovePodSandbox \"308ed674f955de83ba248bb0fa967769bd92eee88b1ebce7e33954fb52df0cd8\" returns successfully" Sep 12 17:15:15.158645 containerd[1599]: time="2025-09-12T17:15:15.158622053Z" level=info msg="StopPodSandbox for \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\"" Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.211 [WARNING][5731] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3eea0a07-67ac-4783-b82b-7eb9a72b754d", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619", Pod:"csi-node-driver-62n5m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali20a19bd2c97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.212 [INFO][5731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.212 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" iface="eth0" netns="" Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.212 [INFO][5731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.212 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.251 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" HandleID="k8s-pod-network.bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.251 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.251 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.262 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" HandleID="k8s-pod-network.bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.263 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" HandleID="k8s-pod-network.bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.266 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:15.270607 containerd[1599]: 2025-09-12 17:15:15.268 [INFO][5731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:15:15.272126 containerd[1599]: time="2025-09-12T17:15:15.271335772Z" level=info msg="TearDown network for sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\" successfully" Sep 12 17:15:15.272126 containerd[1599]: time="2025-09-12T17:15:15.271370772Z" level=info msg="StopPodSandbox for \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\" returns successfully" Sep 12 17:15:15.273860 containerd[1599]: time="2025-09-12T17:15:15.273132337Z" level=info msg="RemovePodSandbox for \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\"" Sep 12 17:15:15.273860 containerd[1599]: time="2025-09-12T17:15:15.273174697Z" level=info msg="Forcibly stopping sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\"" Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.336 [WARNING][5754] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3eea0a07-67ac-4783-b82b-7eb9a72b754d", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"7f8ab5cb18c58ab9eae9ea32320ca39aea8706b7e3e1f1b65668a44c1bbd7619", Pod:"csi-node-driver-62n5m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali20a19bd2c97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.339 [INFO][5754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.339 [INFO][5754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" iface="eth0" netns="" Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.339 [INFO][5754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.339 [INFO][5754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.372 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" HandleID="k8s-pod-network.bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.372 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.372 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.384 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" HandleID="k8s-pod-network.bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.384 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" HandleID="k8s-pod-network.bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-csi--node--driver--62n5m-eth0" Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.387 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:15.394023 containerd[1599]: 2025-09-12 17:15:15.390 [INFO][5754] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332" Sep 12 17:15:15.395991 containerd[1599]: time="2025-09-12T17:15:15.394070478Z" level=info msg="TearDown network for sandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\" successfully" Sep 12 17:15:15.400236 containerd[1599]: time="2025-09-12T17:15:15.400159536Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:15:15.400236 containerd[1599]: time="2025-09-12T17:15:15.400251456Z" level=info msg="RemovePodSandbox \"bbdfa9b39aff191f27dfa53d8529f6f1f589919fdbb48b6208068cc8d2a01332\" returns successfully" Sep 12 17:15:15.403046 containerd[1599]: time="2025-09-12T17:15:15.402548902Z" level=info msg="StopPodSandbox for \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\"" Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.480 [WARNING][5776] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94", Pod:"goldmane-7988f88666-q6k4l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14969e26624", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.482 [INFO][5776] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.482 [INFO][5776] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" iface="eth0" netns="" Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.482 [INFO][5776] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.482 [INFO][5776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.515 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" HandleID="k8s-pod-network.200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.515 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.515 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.541 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" HandleID="k8s-pod-network.200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.541 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" HandleID="k8s-pod-network.200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.550 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:15.557636 containerd[1599]: 2025-09-12 17:15:15.553 [INFO][5776] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:15:15.561480 containerd[1599]: time="2025-09-12T17:15:15.558954464Z" level=info msg="TearDown network for sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\" successfully" Sep 12 17:15:15.561480 containerd[1599]: time="2025-09-12T17:15:15.558999224Z" level=info msg="StopPodSandbox for \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\" returns successfully" Sep 12 17:15:15.561480 containerd[1599]: time="2025-09-12T17:15:15.560312108Z" level=info msg="RemovePodSandbox for \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\"" Sep 12 17:15:15.561480 containerd[1599]: time="2025-09-12T17:15:15.560350868Z" level=info msg="Forcibly stopping sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\"" Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.670 [WARNING][5798] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"64c9fa9c-57aa-4d1b-8020-d4fe7290e5b0", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-0-ae88ce84d6", ContainerID:"d51609fddaeb560213f3d9ecd8604cc2e62b916d094e9821d9128dfc3953ba94", Pod:"goldmane-7988f88666-q6k4l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14969e26624", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.670 [INFO][5798] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.670 [INFO][5798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" iface="eth0" netns="" Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.670 [INFO][5798] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.670 [INFO][5798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.709 [INFO][5805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" HandleID="k8s-pod-network.200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.709 [INFO][5805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.709 [INFO][5805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.727 [WARNING][5805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" HandleID="k8s-pod-network.200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.727 [INFO][5805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" HandleID="k8s-pod-network.200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Workload="ci--4081--3--6--0--ae88ce84d6-k8s-goldmane--7988f88666--q6k4l-eth0" Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.730 [INFO][5805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:15:15.741133 containerd[1599]: 2025-09-12 17:15:15.736 [INFO][5798] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e" Sep 12 17:15:15.741528 containerd[1599]: time="2025-09-12T17:15:15.741182179Z" level=info msg="TearDown network for sandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\" successfully" Sep 12 17:15:15.751858 containerd[1599]: time="2025-09-12T17:15:15.751770289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:15:15.752041 containerd[1599]: time="2025-09-12T17:15:15.751882929Z" level=info msg="RemovePodSandbox \"200281a1ea2d809392d8db254f2c93eab1de1c61ec4c0bd5b627aaed4cbe0c5e\" returns successfully" Sep 12 17:16:13.093574 systemd[1]: run-containerd-runc-k8s.io-090b9f6faa18588856acfde44656417ee926086f8e05014784db1c5426fe6463-runc.OFNu3d.mount: Deactivated successfully. Sep 12 17:16:51.025242 systemd[1]: Started sshd@7-5.75.227.222:22-139.178.89.65:45890.service - OpenSSH per-connection server daemon (139.178.89.65:45890). Sep 12 17:16:52.030500 sshd[6115]: Accepted publickey for core from 139.178.89.65 port 45890 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:16:52.034404 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:52.044256 systemd-logind[1578]: New session 8 of user core. Sep 12 17:16:52.049487 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:16:52.830168 sshd[6115]: pam_unix(sshd:session): session closed for user core Sep 12 17:16:52.838207 systemd[1]: sshd@7-5.75.227.222:22-139.178.89.65:45890.service: Deactivated successfully. Sep 12 17:16:52.844972 systemd-logind[1578]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:16:52.845939 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:16:52.850611 systemd-logind[1578]: Removed session 8. Sep 12 17:16:57.949517 systemd[1]: run-containerd-runc-k8s.io-31990c4c3b660700a5e0bfd43c43cd45b56a5bf7bbcb00565fe39143aa294d5b-runc.qpUp7Q.mount: Deactivated successfully. Sep 12 17:16:58.003751 systemd[1]: Started sshd@8-5.75.227.222:22-139.178.89.65:45892.service - OpenSSH per-connection server daemon (139.178.89.65:45892). 
Sep 12 17:16:59.013348 sshd[6151]: Accepted publickey for core from 139.178.89.65 port 45892 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:16:59.015578 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:59.021334 systemd-logind[1578]: New session 9 of user core. Sep 12 17:16:59.026647 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:16:59.773342 sshd[6151]: pam_unix(sshd:session): session closed for user core Sep 12 17:16:59.778945 systemd[1]: sshd@8-5.75.227.222:22-139.178.89.65:45892.service: Deactivated successfully. Sep 12 17:16:59.784270 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:16:59.785824 systemd-logind[1578]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:16:59.787254 systemd-logind[1578]: Removed session 9. Sep 12 17:17:04.941227 systemd[1]: Started sshd@9-5.75.227.222:22-139.178.89.65:47302.service - OpenSSH per-connection server daemon (139.178.89.65:47302). Sep 12 17:17:05.937478 sshd[6167]: Accepted publickey for core from 139.178.89.65 port 47302 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:17:05.939917 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:05.946097 systemd-logind[1578]: New session 10 of user core. Sep 12 17:17:05.953692 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:17:06.701425 sshd[6167]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:06.707942 systemd[1]: sshd@9-5.75.227.222:22-139.178.89.65:47302.service: Deactivated successfully. Sep 12 17:17:06.712171 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:17:06.713092 systemd-logind[1578]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:17:06.714012 systemd-logind[1578]: Removed session 10. Sep 12 17:17:06.869720 systemd[1]: Started sshd@10-5.75.227.222:22-139.178.89.65:47306.service - OpenSSH per-connection server daemon (139.178.89.65:47306). Sep 12 17:17:07.873164 sshd[6186]: Accepted publickey for core from 139.178.89.65 port 47306 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:17:07.876455 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:07.881970 systemd-logind[1578]: New session 11 of user core. Sep 12 17:17:07.886228 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:17:08.672975 sshd[6186]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:08.677232 systemd[1]: sshd@10-5.75.227.222:22-139.178.89.65:47306.service: Deactivated successfully. Sep 12 17:17:08.683603 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:17:08.684932 systemd-logind[1578]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:17:08.686390 systemd-logind[1578]: Removed session 11. Sep 12 17:17:08.837370 systemd[1]: Started sshd@11-5.75.227.222:22-139.178.89.65:47312.service - OpenSSH per-connection server daemon (139.178.89.65:47312). Sep 12 17:17:09.829781 sshd[6199]: Accepted publickey for core from 139.178.89.65 port 47312 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:17:09.832108 sshd[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:09.839765 systemd-logind[1578]: New session 12 of user core. Sep 12 17:17:09.845176 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 17:17:10.590184 sshd[6199]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:10.596277 systemd[1]: sshd@11-5.75.227.222:22-139.178.89.65:47312.service: Deactivated successfully. Sep 12 17:17:10.599816 systemd-logind[1578]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:17:10.600279 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:17:10.601607 systemd-logind[1578]: Removed session 12. Sep 12 17:17:15.757452 systemd[1]: Started sshd@12-5.75.227.222:22-139.178.89.65:53384.service - OpenSSH per-connection server daemon (139.178.89.65:53384). Sep 12 17:17:16.751218 sshd[6270]: Accepted publickey for core from 139.178.89.65 port 53384 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:17:16.753265 sshd[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:16.759227 systemd-logind[1578]: New session 13 of user core. Sep 12 17:17:16.767421 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:17:17.517647 sshd[6270]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:17.522631 systemd[1]: sshd@12-5.75.227.222:22-139.178.89.65:53384.service: Deactivated successfully. Sep 12 17:17:17.528868 systemd-logind[1578]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:17:17.529561 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:17:17.532519 systemd-logind[1578]: Removed session 13. Sep 12 17:17:17.691374 systemd[1]: Started sshd@13-5.75.227.222:22-139.178.89.65:53398.service - OpenSSH per-connection server daemon (139.178.89.65:53398). Sep 12 17:17:18.672300 sshd[6284]: Accepted publickey for core from 139.178.89.65 port 53398 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:17:18.674943 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:18.681599 systemd-logind[1578]: New session 14 of user core. Sep 12 17:17:18.686438 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:17:19.608221 sshd[6284]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:19.615488 systemd-logind[1578]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:17:19.616269 systemd[1]: sshd@13-5.75.227.222:22-139.178.89.65:53398.service: Deactivated successfully. Sep 12 17:17:19.618889 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:17:19.620550 systemd-logind[1578]: Removed session 14. Sep 12 17:17:19.780328 systemd[1]: Started sshd@14-5.75.227.222:22-139.178.89.65:53410.service - OpenSSH per-connection server daemon (139.178.89.65:53410). Sep 12 17:17:20.773855 sshd[6295]: Accepted publickey for core from 139.178.89.65 port 53410 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:17:20.776147 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:20.782047 systemd-logind[1578]: New session 15 of user core. Sep 12 17:17:20.795424 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:17:23.317622 sshd[6295]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:23.321598 systemd[1]: sshd@14-5.75.227.222:22-139.178.89.65:53410.service: Deactivated successfully. Sep 12 17:17:23.321966 systemd-logind[1578]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:17:23.328365 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:17:23.329819 systemd-logind[1578]: Removed session 15. 
Sep 12 17:17:23.494380 systemd[1]: Started sshd@15-5.75.227.222:22-139.178.89.65:54828.service - OpenSSH per-connection server daemon (139.178.89.65:54828). Sep 12 17:17:24.520375 sshd[6316]: Accepted publickey for core from 139.178.89.65 port 54828 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:17:24.522024 sshd[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:24.531759 systemd-logind[1578]: New session 16 of user core. Sep 12 17:17:24.541146 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:17:25.520085 sshd[6316]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:25.524177 systemd-logind[1578]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:17:25.525818 systemd[1]: sshd@15-5.75.227.222:22-139.178.89.65:54828.service: Deactivated successfully. Sep 12 17:17:25.529449 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:17:25.532337 systemd-logind[1578]: Removed session 16. Sep 12 17:17:25.686714 systemd[1]: Started sshd@16-5.75.227.222:22-139.178.89.65:54838.service - OpenSSH per-connection server daemon (139.178.89.65:54838). Sep 12 17:17:26.676684 sshd[6328]: Accepted publickey for core from 139.178.89.65 port 54838 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:17:26.679269 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:26.689562 systemd-logind[1578]: New session 17 of user core. Sep 12 17:17:26.696673 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:17:27.440482 sshd[6328]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:27.444736 systemd[1]: sshd@16-5.75.227.222:22-139.178.89.65:54838.service: Deactivated successfully. Sep 12 17:17:27.449219 systemd-logind[1578]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:17:27.450532 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:17:27.451578 systemd-logind[1578]: Removed session 17. Sep 12 17:17:32.609318 systemd[1]: Started sshd@17-5.75.227.222:22-139.178.89.65:59058.service - OpenSSH per-connection server daemon (139.178.89.65:59058). Sep 12 17:17:33.584814 sshd[6368]: Accepted publickey for core from 139.178.89.65 port 59058 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:17:33.587078 sshd[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:33.593069 systemd-logind[1578]: New session 18 of user core. Sep 12 17:17:33.598133 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:17:34.336203 sshd[6368]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:34.341984 systemd-logind[1578]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:17:34.343359 systemd[1]: sshd@17-5.75.227.222:22-139.178.89.65:59058.service: Deactivated successfully. Sep 12 17:17:34.347315 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:17:34.348534 systemd-logind[1578]: Removed session 18. Sep 12 17:17:39.512191 systemd[1]: Started sshd@18-5.75.227.222:22-139.178.89.65:59072.service - OpenSSH per-connection server daemon (139.178.89.65:59072). 
Sep 12 17:17:40.503080 sshd[6388]: Accepted publickey for core from 139.178.89.65 port 59072 ssh2: RSA SHA256:1SAgMiZlCHBr0Vs456OmR0PXIyT7CPtESXBBc/039go Sep 12 17:17:40.505457 sshd[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:40.512199 systemd-logind[1578]: New session 19 of user core. Sep 12 17:17:40.516731 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:17:41.280350 sshd[6388]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:41.285771 systemd[1]: sshd@18-5.75.227.222:22-139.178.89.65:59072.service: Deactivated successfully. Sep 12 17:17:41.294155 systemd-logind[1578]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:17:41.296232 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:17:41.300952 systemd-logind[1578]: Removed session 19. Sep 12 17:17:56.596383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e823edc93c86528457dc532f0473cc25cdf822ab0e64c109b9ae9bb44242c2de-rootfs.mount: Deactivated successfully. Sep 12 17:17:56.598126 containerd[1599]: time="2025-09-12T17:17:56.597038613Z" level=info msg="shim disconnected" id=e823edc93c86528457dc532f0473cc25cdf822ab0e64c109b9ae9bb44242c2de namespace=k8s.io Sep 12 17:17:56.598126 containerd[1599]: time="2025-09-12T17:17:56.597193495Z" level=warning msg="cleaning up after shim disconnected" id=e823edc93c86528457dc532f0473cc25cdf822ab0e64c109b9ae9bb44242c2de namespace=k8s.io Sep 12 17:17:56.598126 containerd[1599]: time="2025-09-12T17:17:56.597203175Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:17:56.846914 containerd[1599]: time="2025-09-12T17:17:56.846465836Z" level=info msg="shim disconnected" id=c0746667f7c1bb873b80c6fd94519744f77d3fd4e96098a9f9601348b0f98ed5 namespace=k8s.io Sep 12 17:17:56.846914 containerd[1599]: time="2025-09-12T17:17:56.846536157Z" level=warning msg="cleaning up after shim disconnected" id=c0746667f7c1bb873b80c6fd94519744f77d3fd4e96098a9f9601348b0f98ed5 namespace=k8s.io Sep 12 17:17:56.846914 containerd[1599]: time="2025-09-12T17:17:56.846545117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:17:56.849090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0746667f7c1bb873b80c6fd94519744f77d3fd4e96098a9f9601348b0f98ed5-rootfs.mount: Deactivated successfully. 
Sep 12 17:17:56.991329 kubelet[2755]: E0912 17:17:56.990236 2755 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39218->10.0.0.2:2379: read: connection timed out" Sep 12 17:17:57.333680 kubelet[2755]: I0912 17:17:57.333479 2755 scope.go:117] "RemoveContainer" containerID="e823edc93c86528457dc532f0473cc25cdf822ab0e64c109b9ae9bb44242c2de" Sep 12 17:17:57.336290 kubelet[2755]: I0912 17:17:57.336271 2755 scope.go:117] "RemoveContainer" containerID="c0746667f7c1bb873b80c6fd94519744f77d3fd4e96098a9f9601348b0f98ed5" Sep 12 17:17:57.337133 containerd[1599]: time="2025-09-12T17:17:57.336798232Z" level=info msg="CreateContainer within sandbox \"4bdea02d954ad85dd5bcef8b2218d5dda08c358e83ea9ab38bcc6733ae245de3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 17:17:57.339726 containerd[1599]: time="2025-09-12T17:17:57.339546549Z" level=info msg="CreateContainer within sandbox \"dbe991738d78648634be60976edb45c499853b91f57f622f6e22dcccf325e2d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 12 17:17:57.358263 containerd[1599]: time="2025-09-12T17:17:57.358204722Z" level=info msg="CreateContainer within sandbox \"4bdea02d954ad85dd5bcef8b2218d5dda08c358e83ea9ab38bcc6733ae245de3\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3d1bb7fea26a67ca2159545f252d95ce9ee80931d672c3f46c3e058f456b02ba\"" Sep 12 17:17:57.360007 containerd[1599]: time="2025-09-12T17:17:57.359984826Z" level=info msg="StartContainer for \"3d1bb7fea26a67ca2159545f252d95ce9ee80931d672c3f46c3e058f456b02ba\"" Sep 12 17:17:57.363910 containerd[1599]: time="2025-09-12T17:17:57.363573194Z" level=info msg="CreateContainer within sandbox \"dbe991738d78648634be60976edb45c499853b91f57f622f6e22dcccf325e2d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"391b6e5a12cfe137c289a3092318f8dc7dc48d3c2f6b27aa0b4c9ff2fa0d0105\"" Sep 12 17:17:57.365035 containerd[1599]: time="2025-09-12T17:17:57.364965493Z" level=info msg="StartContainer for \"391b6e5a12cfe137c289a3092318f8dc7dc48d3c2f6b27aa0b4c9ff2fa0d0105\"" Sep 12 17:17:57.440579 containerd[1599]: time="2025-09-12T17:17:57.440518036Z" level=info msg="StartContainer for \"391b6e5a12cfe137c289a3092318f8dc7dc48d3c2f6b27aa0b4c9ff2fa0d0105\" returns successfully" Sep 12 17:17:57.440736 containerd[1599]: time="2025-09-12T17:17:57.440622797Z" level=info msg="StartContainer for \"3d1bb7fea26a67ca2159545f252d95ce9ee80931d672c3f46c3e058f456b02ba\" returns successfully" Sep 12 17:17:58.024892 kubelet[2755]: E0912 17:17:58.019688 2755 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39028->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-0-ae88ce84d6.186498897b31b649 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-0-ae88ce84d6,UID:c25d8ab718eeb168bb9c7302b6f298cf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-0-ae88ce84d6,},FirstTimestamp:2025-09-12 17:17:50.826133065 +0000 UTC m=+217.706940591,LastTimestamp:2025-09-12 17:17:50.826133065 +0000 UTC 
m=+217.706940591,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-0-ae88ce84d6,}" Sep 12 17:18:02.417906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fcb5cd9785d2ec3837f0e0b0170a046f1850b5414b6298897a3fc7b118e9e2e-rootfs.mount: Deactivated successfully. Sep 12 17:18:02.420192 containerd[1599]: time="2025-09-12T17:18:02.417074747Z" level=info msg="shim disconnected" id=7fcb5cd9785d2ec3837f0e0b0170a046f1850b5414b6298897a3fc7b118e9e2e namespace=k8s.io Sep 12 17:18:02.420192 containerd[1599]: time="2025-09-12T17:18:02.418217367Z" level=warning msg="cleaning up after shim disconnected" id=7fcb5cd9785d2ec3837f0e0b0170a046f1850b5414b6298897a3fc7b118e9e2e namespace=k8s.io Sep 12 17:18:02.420192 containerd[1599]: time="2025-09-12T17:18:02.418349329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:18:02.432407 containerd[1599]: time="2025-09-12T17:18:02.432324330Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:18:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:18:03.362462 kubelet[2755]: I0912 17:18:03.362413 2755 scope.go:117] "RemoveContainer" containerID="7fcb5cd9785d2ec3837f0e0b0170a046f1850b5414b6298897a3fc7b118e9e2e" Sep 12 17:18:03.382897 containerd[1599]: time="2025-09-12T17:18:03.382263268Z" level=info msg="CreateContainer within sandbox \"7ad27f7db0a8843a866ef6fb5ee7e001fe46449136eb58625d6418ca9c42595c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 12 17:18:03.409339 containerd[1599]: time="2025-09-12T17:18:03.409290072Z" level=info msg="CreateContainer within sandbox \"7ad27f7db0a8843a866ef6fb5ee7e001fe46449136eb58625d6418ca9c42595c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0ffabf9f26e89180ef6694338ffca7fdd46e673c5a9ebc371bd77313f958004b\"" Sep 12 17:18:03.411075 containerd[1599]: time="2025-09-12T17:18:03.410135927Z" level=info msg="StartContainer for \"0ffabf9f26e89180ef6694338ffca7fdd46e673c5a9ebc371bd77313f958004b\"" Sep 12 17:18:03.482463 containerd[1599]: time="2025-09-12T17:18:03.482391782Z" level=info msg="StartContainer for \"0ffabf9f26e89180ef6694338ffca7fdd46e673c5a9ebc371bd77313f958004b\" returns successfully" Sep 12 17:18:06.993813 kubelet[2755]: E0912 17:18:06.993730 2755 controller.go:195] "Failed to update lease" err="Put \"https://5.75.227.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-0-ae88ce84d6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 12 17:18:07.805538 kubelet[2755]: I0912 17:18:07.805446 2755 status_manager.go:851] "Failed to get status for pod" podUID="3af485b9-c78a-4eca-b805-e549d767acbe" pod="tigera-operator/tigera-operator-58fc44c59b-ln5c9" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39134->10.0.0.2:2379: read: connection timed out" Sep 12 17:18:08.716012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d1bb7fea26a67ca2159545f252d95ce9ee80931d672c3f46c3e058f456b02ba-rootfs.mount: Deactivated successfully. 
Sep 12 17:18:08.721873 containerd[1599]: time="2025-09-12T17:18:08.721593687Z" level=info msg="shim disconnected" id=3d1bb7fea26a67ca2159545f252d95ce9ee80931d672c3f46c3e058f456b02ba namespace=k8s.io Sep 12 17:18:08.721873 containerd[1599]: time="2025-09-12T17:18:08.721668288Z" level=warning msg="cleaning up after shim disconnected" id=3d1bb7fea26a67ca2159545f252d95ce9ee80931d672c3f46c3e058f456b02ba namespace=k8s.io Sep 12 17:18:08.721873 containerd[1599]: time="2025-09-12T17:18:08.721678128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:18:09.388221 kubelet[2755]: I0912 17:18:09.388187 2755 scope.go:117] "RemoveContainer" containerID="e823edc93c86528457dc532f0473cc25cdf822ab0e64c109b9ae9bb44242c2de" Sep 12 17:18:09.388712 kubelet[2755]: I0912 17:18:09.388562 2755 scope.go:117] "RemoveContainer" containerID="3d1bb7fea26a67ca2159545f252d95ce9ee80931d672c3f46c3e058f456b02ba" Sep 12 17:18:09.390045 kubelet[2755]: E0912 17:18:09.389958 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-58fc44c59b-ln5c9_tigera-operator(3af485b9-c78a-4eca-b805-e549d767acbe)\"" pod="tigera-operator/tigera-operator-58fc44c59b-ln5c9" podUID="3af485b9-c78a-4eca-b805-e549d767acbe" Sep 12 17:18:09.391238 containerd[1599]: time="2025-09-12T17:18:09.391188972Z" level=info msg="RemoveContainer for \"e823edc93c86528457dc532f0473cc25cdf822ab0e64c109b9ae9bb44242c2de\"" Sep 12 17:18:09.395612 containerd[1599]: time="2025-09-12T17:18:09.395565788Z" level=info msg="RemoveContainer for \"e823edc93c86528457dc532f0473cc25cdf822ab0e64c109b9ae9bb44242c2de\" returns successfully" Sep 12 17:18:16.995878 kubelet[2755]: E0912 17:18:16.994325 2755 controller.go:195] "Failed to update lease" err="Put \"https://5.75.227.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-0-ae88ce84d6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"