Sep 5 23:56:24.878867 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 5 23:56:24.878893 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025 Sep 5 23:56:24.878904 kernel: KASLR enabled Sep 5 23:56:24.878910 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Sep 5 23:56:24.878915 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Sep 5 23:56:24.878921 kernel: random: crng init done Sep 5 23:56:24.878928 kernel: ACPI: Early table checksum verification disabled Sep 5 23:56:24.878934 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Sep 5 23:56:24.878940 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Sep 5 23:56:24.878948 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:56:24.878954 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:56:24.878960 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:56:24.878966 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:56:24.878972 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:56:24.878980 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:56:24.878987 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:56:24.878994 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:56:24.879000 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:56:24.879007 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Sep 5 23:56:24.879013 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Sep 5 23:56:24.879033 kernel: NUMA: Failed to initialise from firmware Sep 5 23:56:24.879040 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Sep 5 23:56:24.879047 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff] Sep 5 23:56:24.879053 kernel: Zone ranges: Sep 5 23:56:24.879059 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 5 23:56:24.879068 kernel: DMA32 empty Sep 5 23:56:24.879074 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Sep 5 23:56:24.879080 kernel: Movable zone start for each node Sep 5 23:56:24.879087 kernel: Early memory node ranges Sep 5 23:56:24.879093 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Sep 5 23:56:24.879099 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Sep 5 23:56:24.879106 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Sep 5 23:56:24.879112 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Sep 5 23:56:24.879119 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Sep 5 23:56:24.879125 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Sep 5 23:56:24.879131 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Sep 5 23:56:24.879138 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Sep 5 23:56:24.879145 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Sep 5 23:56:24.879152 kernel: psci: probing for conduit method from ACPI. 
Sep 5 23:56:24.879158 kernel: psci: PSCIv1.1 detected in firmware. Sep 5 23:56:24.879167 kernel: psci: Using standard PSCI v0.2 function IDs Sep 5 23:56:24.879174 kernel: psci: Trusted OS migration not required Sep 5 23:56:24.879181 kernel: psci: SMC Calling Convention v1.1 Sep 5 23:56:24.879189 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 5 23:56:24.879196 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 5 23:56:24.879203 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 5 23:56:24.879210 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 5 23:56:24.879216 kernel: Detected PIPT I-cache on CPU0 Sep 5 23:56:24.879223 kernel: CPU features: detected: GIC system register CPU interface Sep 5 23:56:24.879230 kernel: CPU features: detected: Hardware dirty bit management Sep 5 23:56:24.879237 kernel: CPU features: detected: Spectre-v4 Sep 5 23:56:24.879243 kernel: CPU features: detected: Spectre-BHB Sep 5 23:56:24.879250 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 5 23:56:24.879258 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 5 23:56:24.879265 kernel: CPU features: detected: ARM erratum 1418040 Sep 5 23:56:24.879272 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 5 23:56:24.879278 kernel: alternatives: applying boot alternatives Sep 5 23:56:24.879286 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 5 23:56:24.879294 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 23:56:24.879300 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 5 23:56:24.879307 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 23:56:24.879314 kernel: Fallback order for Node 0: 0 Sep 5 23:56:24.879323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Sep 5 23:56:24.879330 kernel: Policy zone: Normal Sep 5 23:56:24.879338 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 23:56:24.879345 kernel: software IO TLB: area num 2. Sep 5 23:56:24.879352 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Sep 5 23:56:24.879361 kernel: Memory: 3882804K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213196K reserved, 0K cma-reserved) Sep 5 23:56:24.879368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 5 23:56:24.879374 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 23:56:24.879383 kernel: rcu: RCU event tracing is enabled. Sep 5 23:56:24.879389 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 5 23:56:24.879396 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 23:56:24.879403 kernel: Tracing variant of Tasks RCU enabled. Sep 5 23:56:24.879410 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 5 23:56:24.880461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 5 23:56:24.880505 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 5 23:56:24.880513 kernel: GICv3: 256 SPIs implemented Sep 5 23:56:24.880521 kernel: GICv3: 0 Extended SPIs implemented Sep 5 23:56:24.880528 kernel: Root IRQ handler: gic_handle_irq Sep 5 23:56:24.880535 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 5 23:56:24.880542 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 5 23:56:24.880549 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 5 23:56:24.880557 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Sep 5 23:56:24.880564 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Sep 5 23:56:24.880572 kernel: GICv3: using LPI property table @0x00000001000e0000 Sep 5 23:56:24.880579 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Sep 5 23:56:24.880591 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 23:56:24.880598 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 23:56:24.880605 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 5 23:56:24.880612 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 5 23:56:24.880619 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 5 23:56:24.880626 kernel: Console: colour dummy device 80x25 Sep 5 23:56:24.880633 kernel: ACPI: Core revision 20230628 Sep 5 23:56:24.880641 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 5 23:56:24.880648 kernel: pid_max: default: 32768 minimum: 301 Sep 5 23:56:24.880655 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 23:56:24.880664 kernel: landlock: Up and running. Sep 5 23:56:24.880671 kernel: SELinux: Initializing. Sep 5 23:56:24.880678 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:56:24.880685 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:56:24.880692 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:56:24.880700 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:56:24.880707 kernel: rcu: Hierarchical SRCU implementation. Sep 5 23:56:24.880714 kernel: rcu: Max phase no-delay instances is 400. Sep 5 23:56:24.880721 kernel: Platform MSI: ITS@0x8080000 domain created Sep 5 23:56:24.880729 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 5 23:56:24.880736 kernel: Remapping and enabling EFI services. Sep 5 23:56:24.880744 kernel: smp: Bringing up secondary CPUs ... Sep 5 23:56:24.880750 kernel: Detected PIPT I-cache on CPU1 Sep 5 23:56:24.880758 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 5 23:56:24.880765 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Sep 5 23:56:24.880772 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 23:56:24.880779 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 5 23:56:24.880786 kernel: smp: Brought up 1 node, 2 CPUs Sep 5 23:56:24.880793 kernel: SMP: Total of 2 processors activated. 
Sep 5 23:56:24.880801 kernel: CPU features: detected: 32-bit EL0 Support Sep 5 23:56:24.880809 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 5 23:56:24.880821 kernel: CPU features: detected: Common not Private translations Sep 5 23:56:24.880831 kernel: CPU features: detected: CRC32 instructions Sep 5 23:56:24.880838 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 5 23:56:24.880845 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 5 23:56:24.880852 kernel: CPU features: detected: LSE atomic instructions Sep 5 23:56:24.880860 kernel: CPU features: detected: Privileged Access Never Sep 5 23:56:24.880867 kernel: CPU features: detected: RAS Extension Support Sep 5 23:56:24.880877 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 5 23:56:24.880884 kernel: CPU: All CPU(s) started at EL1 Sep 5 23:56:24.880892 kernel: alternatives: applying system-wide alternatives Sep 5 23:56:24.880899 kernel: devtmpfs: initialized Sep 5 23:56:24.880907 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 23:56:24.880914 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 5 23:56:24.880922 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 23:56:24.880930 kernel: SMBIOS 3.0.0 present. Sep 5 23:56:24.880938 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Sep 5 23:56:24.880946 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 23:56:24.880954 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 5 23:56:24.880961 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 5 23:56:24.880969 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 5 23:56:24.880976 kernel: audit: initializing netlink subsys (disabled) Sep 5 23:56:24.880983 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1 Sep 5 23:56:24.880991 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 23:56:24.881000 kernel: cpuidle: using governor menu Sep 5 23:56:24.881008 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 5 23:56:24.881015 kernel: ASID allocator initialised with 32768 entries Sep 5 23:56:24.881035 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 23:56:24.881043 kernel: Serial: AMBA PL011 UART driver Sep 5 23:56:24.881051 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 5 23:56:24.881058 kernel: Modules: 0 pages in range for non-PLT usage Sep 5 23:56:24.881066 kernel: Modules: 509008 pages in range for PLT usage Sep 5 23:56:24.881073 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 23:56:24.881083 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 23:56:24.881090 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 5 23:56:24.881098 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 5 23:56:24.881105 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 23:56:24.881113 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 23:56:24.881120 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 5 23:56:24.881128 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 5 23:56:24.881135 kernel: ACPI: Added _OSI(Module Device) Sep 5 23:56:24.881143 kernel: ACPI: Added _OSI(Processor Device) Sep 5 23:56:24.881152 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 23:56:24.881159 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 23:56:24.881166 kernel: ACPI: Interpreter enabled Sep 5 23:56:24.881173 kernel: ACPI: Using GIC for interrupt routing Sep 5 23:56:24.881181 kernel: ACPI: MCFG table detected, 1 entries Sep 5 23:56:24.881188 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 5 23:56:24.881196 kernel: printk: console [ttyAMA0] enabled Sep 5 23:56:24.881204 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 5 23:56:24.881363 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 23:56:24.883584 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 5 23:56:24.883685 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 5 23:56:24.883752 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 5 23:56:24.883816 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 5 23:56:24.883826 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 5 23:56:24.883835 kernel: PCI host bridge to bus 0000:00 Sep 5 23:56:24.883911 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 5 23:56:24.883983 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 5 23:56:24.884063 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 5 23:56:24.884131 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 5 23:56:24.884222 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 5 23:56:24.885538 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Sep 5 23:56:24.885678 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Sep 5 23:56:24.885762 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Sep 5 23:56:24.885844 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 5 23:56:24.885913 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Sep 5 23:56:24.886073 kernel: pci 0000:00:02.1: [1b36:000c] 
type 01 class 0x060400 Sep 5 23:56:24.886171 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Sep 5 23:56:24.886260 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 5 23:56:24.888545 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Sep 5 23:56:24.888760 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 5 23:56:24.888858 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Sep 5 23:56:24.888948 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 5 23:56:24.889039 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Sep 5 23:56:24.889121 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 5 23:56:24.889200 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Sep 5 23:56:24.889284 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 5 23:56:24.889374 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Sep 5 23:56:24.889478 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 5 23:56:24.889563 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Sep 5 23:56:24.889640 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Sep 5 23:56:24.889715 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Sep 5 23:56:24.889814 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Sep 5 23:56:24.889882 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Sep 5 23:56:24.889958 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Sep 5 23:56:24.890072 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Sep 5 23:56:24.890151 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 5 23:56:24.890218 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 5 23:56:24.890299 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 5 23:56:24.890368 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Sep 5 23:56:24.893579 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Sep 5 23:56:24.893700 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Sep 5 23:56:24.893775 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Sep 5 23:56:24.893855 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Sep 5 23:56:24.893925 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Sep 5 23:56:24.894012 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 5 23:56:24.894142 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Sep 5 23:56:24.894276 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Sep 5 23:56:24.894352 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Sep 5 23:56:24.894463 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Sep 5 23:56:24.894552 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Sep 5 23:56:24.894629 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Sep 5 23:56:24.894696 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Sep 5 23:56:24.894764 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 5 23:56:24.894835 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Sep 5 23:56:24.894901 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Sep 5 23:56:24.894968 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Sep 5 23:56:24.895062 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Sep 5 23:56:24.895132 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Sep 5 23:56:24.895219 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Sep 5 23:56:24.895295 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 5 23:56:24.895378 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Sep 5 23:56:24.895485 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Sep 5 23:56:24.895559 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 5 23:56:24.895625 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Sep 5 23:56:24.895695 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Sep 5 23:56:24.895765 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 5 23:56:24.895830 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Sep 5 23:56:24.895896 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Sep 5 23:56:24.895966 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 5 23:56:24.896052 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Sep 5 23:56:24.896128 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Sep 5 23:56:24.896210 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 5 23:56:24.896279 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Sep 5 23:56:24.896344 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Sep 5 23:56:24.896414 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 5 23:56:24.897675 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Sep 5 23:56:24.897745 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Sep 5 23:56:24.897816 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 5 23:56:24.897880 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Sep 5 23:56:24.897951 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Sep 5 23:56:24.898034 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Sep 5 23:56:24.898107 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Sep 5 23:56:24.898177 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Sep 5 23:56:24.898243 kernel: pci 0000:00:02.1: BAR 15: 
assigned [mem 0x8000200000-0x80003fffff 64bit pref] Sep 5 23:56:24.898312 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Sep 5 23:56:24.898378 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Sep 5 23:56:24.898487 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Sep 5 23:56:24.898556 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Sep 5 23:56:24.898625 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Sep 5 23:56:24.898692 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Sep 5 23:56:24.898760 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Sep 5 23:56:24.898825 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 5 23:56:24.898897 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Sep 5 23:56:24.898962 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 5 23:56:24.899043 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Sep 5 23:56:24.899115 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 5 23:56:24.899181 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Sep 5 23:56:24.899247 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Sep 5 23:56:24.899315 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Sep 5 23:56:24.899385 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Sep 5 23:56:24.899686 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Sep 5 23:56:24.899762 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 5 23:56:24.899827 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Sep 5 23:56:24.899890 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 5 23:56:24.899955 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Sep 5 23:56:24.900063 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 5 23:56:24.900147 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Sep 5 23:56:24.900220 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 5 23:56:24.900289 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Sep 5 23:56:24.900353 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 5 23:56:24.901974 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Sep 5 23:56:24.902174 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 5 23:56:24.902253 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Sep 5 23:56:24.902320 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 5 23:56:24.902387 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Sep 5 23:56:24.902614 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 5 23:56:24.902689 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Sep 5 23:56:24.902753 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Sep 5 23:56:24.902822 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Sep 5 23:56:24.902896 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Sep 5 23:56:24.904606 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 5 23:56:24.904700 
kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Sep 5 23:56:24.904773 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 5 23:56:24.904848 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 5 23:56:24.904915 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Sep 5 23:56:24.904985 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Sep 5 23:56:24.905124 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Sep 5 23:56:24.905200 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 5 23:56:24.905273 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 5 23:56:24.905340 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Sep 5 23:56:24.905404 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Sep 5 23:56:24.906184 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Sep 5 23:56:24.906266 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Sep 5 23:56:24.906335 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 5 23:56:24.906400 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 5 23:56:24.906502 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Sep 5 23:56:24.906571 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Sep 5 23:56:24.906645 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Sep 5 23:56:24.906715 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 5 23:56:24.906789 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 5 23:56:24.906853 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Sep 5 23:56:24.906919 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Sep 5 23:56:24.906994 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Sep 5 23:56:24.907090 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 5 23:56:24.907165 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 5 23:56:24.907245 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Sep 5 23:56:24.907314 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Sep 5 23:56:24.907386 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Sep 5 23:56:24.909551 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Sep 5 23:56:24.909645 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 5 23:56:24.909714 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 5 23:56:24.909786 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Sep 5 23:56:24.909853 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 5 23:56:24.909937 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Sep 5 23:56:24.910006 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Sep 5 23:56:24.910137 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Sep 5 23:56:24.910211 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 5 23:56:24.910278 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 5 23:56:24.910343 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Sep 5 23:56:24.910413 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 5 23:56:24.910943 kernel: pci 0000:00:02.7: 
PCI bridge to [bus 08] Sep 5 23:56:24.911011 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 5 23:56:24.911104 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Sep 5 23:56:24.911169 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 5 23:56:24.911236 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 5 23:56:24.911299 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Sep 5 23:56:24.911364 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Sep 5 23:56:24.911481 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Sep 5 23:56:24.911554 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 5 23:56:24.911612 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 5 23:56:24.911671 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 5 23:56:24.911751 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 5 23:56:24.911814 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Sep 5 23:56:24.911873 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Sep 5 23:56:24.911944 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Sep 5 23:56:24.912004 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Sep 5 23:56:24.912085 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Sep 5 23:56:24.912157 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Sep 5 23:56:24.912218 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Sep 5 23:56:24.912278 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Sep 5 23:56:24.912345 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 5 23:56:24.912408 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Sep 5 23:56:24.912527 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Sep 5 23:56:24.912607 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Sep 5 23:56:24.912671 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Sep 5 23:56:24.912731 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Sep 5 23:56:24.912796 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Sep 5 23:56:24.912858 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Sep 5 23:56:24.912934 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 5 23:56:24.913004 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Sep 5 23:56:24.913166 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Sep 5 23:56:24.913245 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 5 23:56:24.913320 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Sep 5 23:56:24.913386 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 5 23:56:24.913540 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 5 23:56:24.913621 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 5 23:56:24.913684 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 5 23:56:24.913746 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 5 23:56:24.913761 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 5 23:56:24.913769 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 5 23:56:24.913778 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 5 
23:56:24.913786 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 5 23:56:24.913794 kernel: iommu: Default domain type: Translated Sep 5 23:56:24.913802 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 5 23:56:24.913810 kernel: efivars: Registered efivars operations Sep 5 23:56:24.913818 kernel: vgaarb: loaded Sep 5 23:56:24.913826 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 5 23:56:24.913837 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 23:56:24.913846 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 23:56:24.913853 kernel: pnp: PnP ACPI init Sep 5 23:56:24.913936 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 5 23:56:24.913948 kernel: pnp: PnP ACPI: found 1 devices Sep 5 23:56:24.913956 kernel: NET: Registered PF_INET protocol family Sep 5 23:56:24.913965 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 23:56:24.913973 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 23:56:24.913983 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 23:56:24.913992 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 23:56:24.914000 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 23:56:24.914008 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 23:56:24.914046 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:56:24.914055 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:56:24.914064 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 23:56:24.914166 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 5 23:56:24.914180 kernel: PCI: CLS 0 bytes, default 64 Sep 5 23:56:24.914192 kernel: kvm [1]: HYP mode not available Sep 5 23:56:24.914201 kernel: Initialise system trusted keyrings Sep 5 23:56:24.914209 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 23:56:24.914217 kernel: Key type asymmetric registered Sep 5 23:56:24.914225 kernel: Asymmetric key parser 'x509' registered Sep 5 23:56:24.914233 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 5 23:56:24.914241 kernel: io scheduler mq-deadline registered Sep 5 23:56:24.914249 kernel: io scheduler kyber registered Sep 5 23:56:24.914257 kernel: io scheduler bfq registered Sep 5 23:56:24.914268 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 5 23:56:24.914344 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 5 23:56:24.914413 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 5 23:56:24.914584 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:56:24.914657 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 5 23:56:24.914724 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 5 23:56:24.914791 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:56:24.914884 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Sep 5 23:56:24.914955 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 5 23:56:24.915068 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ Sep 5 23:56:24.915153 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 5 23:56:24.915222 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 5 23:56:24.915290 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:56:24.915363 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 5 23:56:24.915454 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 5 23:56:24.915593 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:56:24.915689 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 5 23:56:24.915761 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 5 23:56:24.915829 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:56:24.915908 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 5 23:56:24.915979 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 5 23:56:24.916069 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:56:24.916146 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 5 23:56:24.916215 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 5 23:56:24.916284 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:56:24.916299 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 5 23:56:24.916369 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 5 23:56:24.916458 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 5 23:56:24.916530 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:56:24.916541 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 5 23:56:24.916550 kernel: ACPI: button: Power Button [PWRB] Sep 5 23:56:24.916558 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 5 23:56:24.916637 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 5 23:56:24.916715 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 5 23:56:24.916726 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 23:56:24.916734 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 5 23:56:24.916804 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 5 23:56:24.916816 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 5 23:56:24.916824 kernel: thunder_xcv, ver 1.0 Sep 5 23:56:24.916832 kernel: thunder_bgx, ver 1.0 Sep 5 23:56:24.916840 kernel: nicpf, ver 1.0 Sep 5 23:56:24.916852 kernel: nicvf, ver 1.0 Sep 5 23:56:24.916937 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 5 23:56:24.917015 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:56:24 UTC (1757116584) Sep 5 23:56:24.917075 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 23:56:24.917083 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 5 23:56:24.917092 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 5 23:56:24.917102 kernel: watchdog: Hard watchdog permanently disabled Sep 5 23:56:24.917115 kernel: NET: 
Registered PF_INET6 protocol family Sep 5 23:56:24.917123 kernel: Segment Routing with IPv6 Sep 5 23:56:24.917132 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 23:56:24.917140 kernel: NET: Registered PF_PACKET protocol family Sep 5 23:56:24.917148 kernel: Key type dns_resolver registered Sep 5 23:56:24.917156 kernel: registered taskstats version 1 Sep 5 23:56:24.917164 kernel: Loading compiled-in X.509 certificates Sep 5 23:56:24.917172 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20' Sep 5 23:56:24.917180 kernel: Key type .fscrypt registered Sep 5 23:56:24.917188 kernel: Key type fscrypt-provisioning registered Sep 5 23:56:24.917198 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 23:56:24.917206 kernel: ima: Allocated hash algorithm: sha1 Sep 5 23:56:24.917214 kernel: ima: No architecture policies found Sep 5 23:56:24.917222 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 5 23:56:24.917231 kernel: clk: Disabling unused clocks Sep 5 23:56:24.917239 kernel: Freeing unused kernel memory: 39424K Sep 5 23:56:24.917247 kernel: Run /init as init process Sep 5 23:56:24.917255 kernel: with arguments: Sep 5 23:56:24.917265 kernel: /init Sep 5 23:56:24.917273 kernel: with environment: Sep 5 23:56:24.917281 kernel: HOME=/ Sep 5 23:56:24.917289 kernel: TERM=linux Sep 5 23:56:24.917297 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 23:56:24.917307 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:56:24.917318 systemd[1]: Detected virtualization kvm. Sep 5 23:56:24.917327 systemd[1]: Detected architecture arm64. Sep 5 23:56:24.917338 systemd[1]: Running in initrd. Sep 5 23:56:24.917347 systemd[1]: No hostname configured, using default hostname. Sep 5 23:56:24.917356 systemd[1]: Hostname set to . Sep 5 23:56:24.917365 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:56:24.917374 systemd[1]: Queued start job for default target initrd.target. Sep 5 23:56:24.917383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:56:24.917393 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:56:24.917403 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 23:56:24.917417 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:56:24.917442 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 23:56:24.917451 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 23:56:24.917479 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 23:56:24.917489 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 23:56:24.917498 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:56:24.917507 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Sep 5 23:56:24.917520 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:56:24.917532 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:56:24.917542 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:56:24.917551 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:56:24.917560 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:56:24.917570 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:56:24.917580 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 23:56:24.917589 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 23:56:24.917598 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:56:24.917609 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:56:24.917618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:56:24.917630 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:56:24.917639 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 23:56:24.917648 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:56:24.917657 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 23:56:24.917667 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 23:56:24.917677 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:56:24.917687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:56:24.917697 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:56:24.917706 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 23:56:24.917745 systemd-journald[237]: Collecting audit messages is disabled. Sep 5 23:56:24.917768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:56:24.917777 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 23:56:24.917788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:56:24.917797 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 23:56:24.917807 kernel: Bridge firewalling registered Sep 5 23:56:24.917816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:56:24.917826 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:56:24.917835 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:56:24.917845 systemd-journald[237]: Journal started Sep 5 23:56:24.917864 systemd-journald[237]: Runtime Journal (/run/log/journal/e7f7eff60d944020ad8d532dbc248d17) is 8.0M, max 76.6M, 68.6M free. Sep 5 23:56:24.888823 systemd-modules-load[238]: Inserted module 'overlay' Sep 5 23:56:24.921858 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:56:24.910651 systemd-modules-load[238]: Inserted module 'br_netfilter' Sep 5 23:56:24.928872 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:56:24.931618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:56:24.933670 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 5 23:56:24.945897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:56:24.955696 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:56:24.959594 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:56:24.962328 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:56:24.970685 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 23:56:24.971595 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:56:24.975326 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:56:24.986709 dracut-cmdline[271]: dracut-dracut-053 Sep 5 23:56:24.990507 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 5 23:56:25.011150 systemd-resolved[273]: Positive Trust Anchors: Sep 5 23:56:25.011171 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:56:25.011203 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:56:25.017595 systemd-resolved[273]: Defaulting to hostname 'linux'. Sep 5 23:56:25.018668 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:56:25.019360 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:56:25.089485 kernel: SCSI subsystem initialized Sep 5 23:56:25.093465 kernel: Loading iSCSI transport class v2.0-870. Sep 5 23:56:25.101484 kernel: iscsi: registered transport (tcp) Sep 5 23:56:25.115497 kernel: iscsi: registered transport (qla4xxx) Sep 5 23:56:25.115580 kernel: QLogic iSCSI HBA Driver Sep 5 23:56:25.163604 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 23:56:25.172693 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 23:56:25.190791 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 5 23:56:25.190864 kernel: device-mapper: uevent: version 1.0.3 Sep 5 23:56:25.191434 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 23:56:25.244491 kernel: raid6: neonx8 gen() 15678 MB/s Sep 5 23:56:25.261491 kernel: raid6: neonx4 gen() 15540 MB/s Sep 5 23:56:25.278467 kernel: raid6: neonx2 gen() 13160 MB/s Sep 5 23:56:25.295503 kernel: raid6: neonx1 gen() 10365 MB/s Sep 5 23:56:25.312474 kernel: raid6: int64x8 gen() 6916 MB/s Sep 5 23:56:25.329496 kernel: raid6: int64x4 gen() 7306 MB/s Sep 5 23:56:25.346482 kernel: raid6: int64x2 gen() 6092 MB/s Sep 5 23:56:25.364246 kernel: raid6: int64x1 gen() 5020 MB/s Sep 5 23:56:25.364316 kernel: raid6: using algorithm neonx8 gen() 15678 MB/s Sep 5 23:56:25.380512 kernel: raid6: .... xor() 11747 MB/s, rmw enabled Sep 5 23:56:25.380595 kernel: raid6: using neon recovery algorithm Sep 5 23:56:25.385635 kernel: xor: measuring software checksum speed Sep 5 23:56:25.385709 kernel: 8regs : 19726 MB/sec Sep 5 23:56:25.387314 kernel: 32regs : 19664 MB/sec Sep 5 23:56:25.387384 kernel: arm64_neon : 25488 MB/sec Sep 5 23:56:25.387402 kernel: xor: using function: arm64_neon (25488 MB/sec) Sep 5 23:56:25.440055 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 23:56:25.454325 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:56:25.460641 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:56:25.487249 systemd-udevd[455]: Using default interface naming scheme 'v255'. Sep 5 23:56:25.490794 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:56:25.497598 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 23:56:25.523808 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Sep 5 23:56:25.563278 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:56:25.571856 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:56:25.624465 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:56:25.630932 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 23:56:25.652975 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 23:56:25.656949 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:56:25.658534 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:56:25.659655 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:56:25.669604 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 23:56:25.687696 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:56:25.735939 kernel: ACPI: bus type USB registered Sep 5 23:56:25.736030 kernel: usbcore: registered new interface driver usbfs Sep 5 23:56:25.736044 kernel: scsi host0: Virtio SCSI HBA Sep 5 23:56:25.741575 kernel: usbcore: registered new interface driver hub Sep 5 23:56:25.741644 kernel: usbcore: registered new device driver usb Sep 5 23:56:25.743455 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 23:56:25.745678 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 5 23:56:25.745788 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:56:25.745913 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 5 23:56:25.748404 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:56:25.749814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:56:25.751140 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:56:25.755339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:56:25.766746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:56:25.777478 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 5 23:56:25.777718 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 5 23:56:25.781445 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 5 23:56:25.782849 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 5 23:56:25.783056 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 5 23:56:25.783628 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 5 23:56:25.789546 kernel: hub 1-0:1.0: USB hub found Sep 5 23:56:25.789782 kernel: hub 1-0:1.0: 4 ports detected Sep 5 23:56:25.792452 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 5 23:56:25.794114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:56:25.796644 kernel: hub 2-0:1.0: USB hub found Sep 5 23:56:25.796836 kernel: hub 2-0:1.0: 4 ports detected Sep 5 23:56:25.809162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:56:25.812523 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 5 23:56:25.817578 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 5 23:56:25.817848 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 23:56:25.820444 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 5 23:56:25.824776 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 5 23:56:25.827579 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 5 23:56:25.829463 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 5 23:56:25.830272 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 5 23:56:25.830376 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 23:56:25.834168 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:56:25.837602 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 23:56:25.837649 kernel: GPT:17805311 != 80003071 Sep 5 23:56:25.839698 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 23:56:25.839759 kernel: GPT:17805311 != 80003071 Sep 5 23:56:25.839782 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 23:56:25.839803 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:56:25.842439 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 5 23:56:25.878964 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (516) Sep 5 23:56:25.882460 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (513) Sep 5 23:56:25.892851 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 5 23:56:25.901390 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Sep 5 23:56:25.907760 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 5 23:56:25.908747 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 5 23:56:25.916052 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 5 23:56:25.925605 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 23:56:25.935480 disk-uuid[571]: Primary Header is updated. Sep 5 23:56:25.935480 disk-uuid[571]: Secondary Entries is updated. Sep 5 23:56:25.935480 disk-uuid[571]: Secondary Header is updated. Sep 5 23:56:25.942592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:56:25.950230 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:56:25.953461 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:56:26.031457 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 5 23:56:26.167046 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 5 23:56:26.167120 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 5 23:56:26.167389 kernel: usbcore: registered new interface driver usbhid Sep 5 23:56:26.167409 kernel: usbhid: USB HID core driver Sep 5 23:56:26.276537 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 5 23:56:26.404527 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 5 23:56:26.458528 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 5 23:56:26.955094 disk-uuid[573]: The operation has completed successfully. Sep 5 23:56:26.956629 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:56:27.007287 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 23:56:27.008086 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 23:56:27.024713 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 23:56:27.030779 sh[590]: Success Sep 5 23:56:27.044616 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 5 23:56:27.106962 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 23:56:27.120670 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 23:56:27.122617 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 23:56:27.140734 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e Sep 5 23:56:27.140808 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:56:27.140827 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 23:56:27.140854 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 23:56:27.141571 kernel: BTRFS info (device dm-0): using free space tree Sep 5 23:56:27.150466 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 5 23:56:27.153052 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 23:56:27.154593 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
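The dev-disk-by\x2dlabel-*.device names above are systemd's escaped forms of /dev/disk/by-label/ROOT, EFI-SYSTEM and OEM; USR-A shows up twice, once via its partition label and once via its PARTUUID. On the booted host the same identifiers udev derived can be listed with:

    # show the labels and GUIDs the .device units were generated from
    lsblk -o NAME,LABEL,PARTLABEL,PARTUUID /dev/sda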
Sep 5 23:56:27.160750 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 23:56:27.166640 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 23:56:27.176448 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:56:27.176510 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:56:27.176523 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:56:27.183159 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 5 23:56:27.183235 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:56:27.196981 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 5 23:56:27.198837 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:56:27.206049 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 23:56:27.213902 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 23:56:27.302481 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:56:27.312835 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:56:27.322802 ignition[672]: Ignition 2.19.0 Sep 5 23:56:27.323024 ignition[672]: Stage: fetch-offline Sep 5 23:56:27.323074 ignition[672]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:56:27.323085 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:56:27.323285 ignition[672]: parsed url from cmdline: "" Sep 5 23:56:27.323290 ignition[672]: no config URL provided Sep 5 23:56:27.323294 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 23:56:27.323301 ignition[672]: no config at "/usr/lib/ignition/user.ign" Sep 5 23:56:27.323307 ignition[672]: failed to fetch config: resource requires networking Sep 5 23:56:27.329715 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:56:27.323540 ignition[672]: Ignition finished successfully Sep 5 23:56:27.349133 systemd-networkd[777]: lo: Link UP Sep 5 23:56:27.349146 systemd-networkd[777]: lo: Gained carrier Sep 5 23:56:27.350881 systemd-networkd[777]: Enumeration completed Sep 5 23:56:27.351153 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:56:27.351656 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:56:27.351660 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:56:27.352907 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:56:27.352910 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:56:27.353560 systemd-networkd[777]: eth0: Link UP Sep 5 23:56:27.353563 systemd-networkd[777]: eth0: Gained carrier Sep 5 23:56:27.353570 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:56:27.354549 systemd[1]: Reached target network.target - Network. 
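The "found matching network ... based on potentially unpredictable interface name" lines mean both NICs matched the catch-all networkd unit shipped at /usr/lib/systemd/network/zz-default.network. Its exact contents are not shown in the log; a minimal sketch of such a match-everything DHCP fallback looks like:

    # zz-default.network (illustrative sketch, not the verbatim shipped file)
    [Match]
    Name=*

    [Network]
    DHCP=yes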
Sep 5 23:56:27.360620 systemd-networkd[777]: eth1: Link UP Sep 5 23:56:27.360625 systemd-networkd[777]: eth1: Gained carrier Sep 5 23:56:27.360638 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:56:27.363734 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 5 23:56:27.383520 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 5 23:56:27.393063 ignition[780]: Ignition 2.19.0 Sep 5 23:56:27.393080 ignition[780]: Stage: fetch Sep 5 23:56:27.393287 ignition[780]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:56:27.393298 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:56:27.393393 ignition[780]: parsed url from cmdline: "" Sep 5 23:56:27.393397 ignition[780]: no config URL provided Sep 5 23:56:27.393401 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 23:56:27.393408 ignition[780]: no config at "/usr/lib/ignition/user.ign" Sep 5 23:56:27.393457 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 5 23:56:27.394075 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 5 23:56:27.413566 systemd-networkd[777]: eth0: DHCPv4 address 138.199.175.7/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 5 23:56:27.594336 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 5 23:56:27.599930 ignition[780]: GET result: OK Sep 5 23:56:27.600079 ignition[780]: parsing config with SHA512: 28d8d0413086cb49077e3e3885c1730a54da5f9edc098474a12a86d1901415572007b2eb8b86af3454b4f25088d1fe2908e88a23bcbe0f80d20189d515b83573 Sep 5 23:56:27.605088 unknown[780]: fetched base config from "system" Sep 5 23:56:27.605102 unknown[780]: fetched base config from "system" Sep 5 23:56:27.605504 ignition[780]: fetch: fetch complete Sep 5 23:56:27.605107 unknown[780]: fetched user config from "hetzner" Sep 5 23:56:27.605511 ignition[780]: fetch: fetch passed Sep 5 23:56:27.607900 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 5 23:56:27.605560 ignition[780]: Ignition finished successfully Sep 5 23:56:27.620769 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 23:56:27.635493 ignition[788]: Ignition 2.19.0 Sep 5 23:56:27.635503 ignition[788]: Stage: kargs Sep 5 23:56:27.635670 ignition[788]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:56:27.635680 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:56:27.636735 ignition[788]: kargs: kargs passed Sep 5 23:56:27.636793 ignition[788]: Ignition finished successfully Sep 5 23:56:27.639834 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 23:56:27.646725 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 5 23:56:27.659818 ignition[795]: Ignition 2.19.0 Sep 5 23:56:27.660841 ignition[795]: Stage: disks Sep 5 23:56:27.661114 ignition[795]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:56:27.661129 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:56:27.662519 ignition[795]: disks: disks passed Sep 5 23:56:27.662597 ignition[795]: Ignition finished successfully Sep 5 23:56:27.664498 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 23:56:27.666792 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
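The fetch stage fails on attempt #1 only because no route to 169.254.169.254 existed yet; once eth0's DHCP lease and gateway arrived, attempt #2 succeeds and Ignition logs the SHA512 of the retrieved userdata. The same endpoint can be queried by hand from the running host, and the digest should match the one logged above, assuming the userdata was not changed after provisioning:

    # fetch the Hetzner userdata Ignition consumed and hash it
    curl -s http://169.254.169.254/hetzner/v1/userdata | sha512sum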
Sep 5 23:56:27.668156 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 23:56:27.669610 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:56:27.670153 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:56:27.670791 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:56:27.678770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 23:56:27.712769 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 5 23:56:27.718575 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 23:56:27.724588 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 23:56:27.775475 kernel: EXT4-fs (sda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none. Sep 5 23:56:27.776846 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 23:56:27.778527 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 23:56:27.789636 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 23:56:27.793703 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 23:56:27.796723 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 5 23:56:27.800578 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 23:56:27.800628 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:56:27.807278 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 23:56:27.810793 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (811) Sep 5 23:56:27.814025 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:56:27.814101 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:56:27.814114 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:56:27.820736 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 23:56:27.825604 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 5 23:56:27.825633 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:56:27.827553 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 23:56:27.876206 coreos-metadata[813]: Sep 05 23:56:27.875 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 5 23:56:27.878749 coreos-metadata[813]: Sep 05 23:56:27.878 INFO Fetch successful Sep 5 23:56:27.879337 coreos-metadata[813]: Sep 05 23:56:27.878 INFO wrote hostname ci-4081-3-5-n-4ef3874a70 to /sysroot/etc/hostname Sep 5 23:56:27.882672 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 23:56:27.883443 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 5 23:56:27.889509 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Sep 5 23:56:27.894361 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 23:56:27.899941 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 23:56:28.003288 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 23:56:28.011629 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
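coreos-metadata resolves the hostname from the same link-local metadata service and writes it into the new root before the passwd/group scaffolding is cut. The lookup it performed, reproduced by hand:

    # the endpoint coreos-metadata fetched, per the log line above
    curl -s http://169.254.169.254/hetzner/v1/metadata/hostname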
Sep 5 23:56:28.018792 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 23:56:28.022493 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:56:28.043947 ignition[930]: INFO : Ignition 2.19.0 Sep 5 23:56:28.044912 ignition[930]: INFO : Stage: mount Sep 5 23:56:28.045672 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:56:28.047324 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:56:28.047324 ignition[930]: INFO : mount: mount passed Sep 5 23:56:28.047324 ignition[930]: INFO : Ignition finished successfully Sep 5 23:56:28.048524 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 5 23:56:28.050920 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 23:56:28.058636 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 23:56:28.140191 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 23:56:28.147821 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 23:56:28.160487 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Sep 5 23:56:28.162702 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:56:28.162904 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:56:28.163056 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:56:28.166447 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 5 23:56:28.166506 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:56:28.169501 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 23:56:28.194972 ignition[960]: INFO : Ignition 2.19.0 Sep 5 23:56:28.194972 ignition[960]: INFO : Stage: files Sep 5 23:56:28.194972 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:56:28.194972 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:56:28.199053 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Sep 5 23:56:28.199053 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 23:56:28.199053 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 23:56:28.203506 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 23:56:28.203506 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 23:56:28.203506 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 23:56:28.203506 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 5 23:56:28.203506 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 5 23:56:28.200879 unknown[960]: wrote ssh authorized keys file for user: core Sep 5 23:56:28.370467 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 23:56:28.492724 systemd-networkd[777]: eth1: Gained IPv6LL Sep 5 23:56:28.940874 systemd-networkd[777]: eth0: Gained IPv6LL Sep 5 23:56:28.943715 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 5 23:56:28.946816 ignition[960]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 23:56:28.946816 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 5 23:56:29.219024 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 5 23:56:29.546638 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 23:56:29.546638 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 5 23:56:29.548997 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:56:29.548997 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:56:29.548997 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 5 23:56:29.548997 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 5 23:56:29.554696 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 5 23:56:29.554696 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in 
"00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 5 23:56:29.554696 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 5 23:56:29.554696 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Sep 5 23:56:29.554696 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 23:56:29.554696 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:56:29.554696 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:56:29.554696 ignition[960]: INFO : files: files passed Sep 5 23:56:29.554696 ignition[960]: INFO : Ignition finished successfully Sep 5 23:56:29.551850 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 23:56:29.558650 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 23:56:29.564529 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 23:56:29.568839 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 23:56:29.569684 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 23:56:29.577100 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:56:29.577100 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:56:29.580313 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:56:29.583179 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:56:29.585802 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 23:56:29.592784 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 23:56:29.638698 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 23:56:29.638859 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 23:56:29.641169 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 23:56:29.643123 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 23:56:29.644364 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 23:56:29.649666 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 23:56:29.664852 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:56:29.673713 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 23:56:29.686752 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:56:29.688231 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:56:29.689659 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 23:56:29.690348 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 23:56:29.690609 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:56:29.692617 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Sep 5 23:56:29.694553 systemd[1]: Stopped target basic.target - Basic System. Sep 5 23:56:29.695548 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 23:56:29.696623 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:56:29.697803 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 23:56:29.699038 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 23:56:29.700092 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:56:29.701311 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 23:56:29.702469 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 23:56:29.703529 systemd[1]: Stopped target swap.target - Swaps. Sep 5 23:56:29.704538 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 23:56:29.704666 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:56:29.705942 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:56:29.706653 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:56:29.707747 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 23:56:29.707822 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:56:29.709014 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 23:56:29.709139 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 23:56:29.710650 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 23:56:29.710764 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:56:29.712262 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 23:56:29.712356 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 23:56:29.713261 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 5 23:56:29.713355 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 5 23:56:29.722937 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 23:56:29.724919 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 23:56:29.725179 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:56:29.728678 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 23:56:29.729756 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 23:56:29.729883 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:56:29.732401 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 23:56:29.732546 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:56:29.744236 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 23:56:29.746444 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 5 23:56:29.753452 ignition[1013]: INFO : Ignition 2.19.0 Sep 5 23:56:29.753452 ignition[1013]: INFO : Stage: umount Sep 5 23:56:29.753452 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:56:29.753452 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:56:29.758555 ignition[1013]: INFO : umount: umount passed Sep 5 23:56:29.758555 ignition[1013]: INFO : Ignition finished successfully Sep 5 23:56:29.756396 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 23:56:29.760165 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 23:56:29.760564 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 23:56:29.762152 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 23:56:29.762204 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 23:56:29.767539 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 23:56:29.767626 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 23:56:29.772031 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 5 23:56:29.772165 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 5 23:56:29.774297 systemd[1]: Stopped target network.target - Network. Sep 5 23:56:29.775032 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 23:56:29.775101 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:56:29.778070 systemd[1]: Stopped target paths.target - Path Units. Sep 5 23:56:29.778628 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 23:56:29.785999 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:56:29.788043 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 23:56:29.790745 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 23:56:29.791330 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 23:56:29.791375 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:56:29.792031 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 23:56:29.792070 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:56:29.792653 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 23:56:29.792705 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 23:56:29.793658 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 23:56:29.793711 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 23:56:29.794925 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 23:56:29.796010 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 23:56:29.799374 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 23:56:29.799949 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 23:56:29.801846 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 23:56:29.801941 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 23:56:29.802835 systemd-networkd[777]: eth1: DHCPv6 lease lost Sep 5 23:56:29.804468 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 23:56:29.804542 systemd-networkd[777]: eth0: DHCPv6 lease lost Sep 5 23:56:29.804597 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 23:56:29.809652 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 5 23:56:29.809773 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 23:56:29.812134 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 23:56:29.812182 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:56:29.817622 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 23:56:29.818460 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 23:56:29.818563 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:56:29.821767 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:56:29.821886 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:56:29.823851 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 23:56:29.823913 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 23:56:29.825177 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 23:56:29.825230 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:56:29.826750 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:56:29.840589 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 23:56:29.840769 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:56:29.842771 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 23:56:29.842825 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 23:56:29.844322 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 23:56:29.844355 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:56:29.845285 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 23:56:29.845332 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:56:29.847083 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 23:56:29.847133 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 23:56:29.848556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:56:29.848600 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:56:29.860831 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 23:56:29.862850 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 23:56:29.862960 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:56:29.864900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:56:29.865018 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:56:29.869171 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 23:56:29.869328 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 23:56:29.874968 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 23:56:29.875097 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 23:56:29.876814 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 23:56:29.886764 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 23:56:29.894526 systemd[1]: Switching root. 
Sep 5 23:56:29.936912 systemd-journald[237]: Journal stopped Sep 5 23:56:30.762598 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 5 23:56:30.762661 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 23:56:30.762673 kernel: SELinux: policy capability open_perms=1 Sep 5 23:56:30.762686 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 23:56:30.762700 kernel: SELinux: policy capability always_check_network=0 Sep 5 23:56:30.762709 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 23:56:30.762719 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 23:56:30.762732 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 23:56:30.762742 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 23:56:30.762751 kernel: audit: type=1403 audit(1757116590.065:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 23:56:30.762761 systemd[1]: Successfully loaded SELinux policy in 37.008ms. Sep 5 23:56:30.762781 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.040ms. Sep 5 23:56:30.762796 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:56:30.762806 systemd[1]: Detected virtualization kvm. Sep 5 23:56:30.762817 systemd[1]: Detected architecture arm64. Sep 5 23:56:30.762826 systemd[1]: Detected first boot. Sep 5 23:56:30.762836 systemd[1]: Hostname set to <ci-4081-3-5-n-4ef3874a70>. Sep 5 23:56:30.762846 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:56:30.762857 zram_generator::config[1055]: No configuration found. Sep 5 23:56:30.762870 systemd[1]: Populated /etc with preset unit settings. Sep 5 23:56:30.762880 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 23:56:30.762890 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 23:56:30.762900 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 23:56:30.762911 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 23:56:30.762921 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 23:56:30.762931 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 23:56:30.762945 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 23:56:30.762956 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 23:56:30.763004 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 23:56:30.763018 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 23:56:30.763028 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 23:56:30.763038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:56:30.763049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:56:30.763061 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 23:56:30.763071 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
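"Detected first boot" plus "Initializing machine ID from VM UUID" is systemd's KVM first-boot path: with an empty /etc/machine-id it derives the ID from the hypervisor-supplied DMI product UUID rather than generating a random one. On a running guest the two values can be compared, assuming SMBIOS is exposed as it is here:

    # firmware-provided VM UUID vs. the machine ID systemd derived from it
    cat /sys/class/dmi/id/product_uuid
    cat /etc/machine-id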
Sep 5 23:56:30.763081 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 23:56:30.763092 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:56:30.763109 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 5 23:56:30.763124 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:56:30.763135 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 23:56:30.763148 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 23:56:30.763161 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 23:56:30.763172 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 23:56:30.763183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:56:30.763194 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:56:30.763204 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:56:30.763214 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:56:30.763224 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 23:56:30.763235 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 23:56:30.763245 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:56:30.763255 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:56:30.763266 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:56:30.763276 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 23:56:30.763287 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 23:56:30.763299 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 23:56:30.763309 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 23:56:30.763319 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 23:56:30.763330 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 23:56:30.763340 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 23:56:30.763351 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 23:56:30.763362 systemd[1]: Reached target machines.target - Containers. Sep 5 23:56:30.763375 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 23:56:30.763386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:56:30.763396 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:56:30.763408 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 23:56:30.763440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:56:30.763456 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:56:30.763467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:56:30.763478 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Sep 5 23:56:30.763489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:56:30.763499 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 23:56:30.763510 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 23:56:30.763525 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 23:56:30.763535 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 23:56:30.763545 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 23:56:30.763559 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:56:30.763570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:56:30.763580 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 23:56:30.763591 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 23:56:30.763625 systemd-journald[1118]: Collecting audit messages is disabled. Sep 5 23:56:30.763647 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:56:30.763657 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 23:56:30.763670 systemd[1]: Stopped verity-setup.service. Sep 5 23:56:30.763681 systemd-journald[1118]: Journal started Sep 5 23:56:30.763702 systemd-journald[1118]: Runtime Journal (/run/log/journal/e7f7eff60d944020ad8d532dbc248d17) is 8.0M, max 76.6M, 68.6M free. Sep 5 23:56:30.548211 systemd[1]: Queued start job for default target multi-user.target. Sep 5 23:56:30.571027 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 5 23:56:30.571759 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 23:56:30.769456 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:56:30.774244 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 23:56:30.776757 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 23:56:30.778760 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 23:56:30.780126 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 23:56:30.781111 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 23:56:30.784441 kernel: ACPI: bus type drm_connector registered Sep 5 23:56:30.784748 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 23:56:30.787991 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:56:30.789175 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 23:56:30.789331 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 23:56:30.791366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:56:30.791534 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:56:30.792501 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:56:30.792637 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:56:30.794789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:56:30.794947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:56:30.799086 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 5 23:56:30.802618 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 23:56:30.812934 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 23:56:30.820444 kernel: loop: module loaded Sep 5 23:56:30.821257 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:56:30.823535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:56:30.832218 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 23:56:30.844180 kernel: fuse: init (API version 7.39) Sep 5 23:56:30.842463 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 23:56:30.843112 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 23:56:30.843169 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:56:30.847605 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 23:56:30.852614 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 23:56:30.855615 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 23:56:30.858615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:56:30.861631 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 23:56:30.863756 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 23:56:30.864449 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:56:30.865592 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 23:56:30.866272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:56:30.867623 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:56:30.872615 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 23:56:30.877468 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 23:56:30.879858 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 23:56:30.880047 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 23:56:30.880941 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 23:56:30.883865 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 23:56:30.894528 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 23:56:30.909685 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 23:56:30.918593 systemd-journald[1118]: Time spent on flushing to /var/log/journal/e7f7eff60d944020ad8d532dbc248d17 is 97.621ms for 1123 entries. Sep 5 23:56:30.918593 systemd-journald[1118]: System Journal (/var/log/journal/e7f7eff60d944020ad8d532dbc248d17) is 8.0M, max 584.8M, 576.8M free. Sep 5 23:56:31.028493 systemd-journald[1118]: Received client request to flush runtime journal. 
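The journald lines show the volatile runtime journal in /run/log/journal (capped at 76.6M here) being flushed into the persistent journal under /var/log/journal once the root filesystem is writable; the "client request to flush" is what systemd-journal-flush.service issues, equivalent to running:

    # move /run/log/journal into /var/log/journal, then check the footprint
    journalctl --flush
    journalctl --disk-usage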
Sep 5 23:56:31.028556 kernel: loop0: detected capacity change from 0 to 8 Sep 5 23:56:31.028579 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 23:56:31.028595 kernel: loop1: detected capacity change from 0 to 114328 Sep 5 23:56:30.920626 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 23:56:30.943226 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:56:30.961469 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 23:56:30.966229 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 23:56:30.975647 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 23:56:30.992986 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 23:56:31.006699 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:56:31.014687 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:56:31.029520 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 23:56:31.033677 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 23:56:31.048704 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 23:56:31.050936 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 23:56:31.053729 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 5 23:56:31.053756 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 5 23:56:31.062601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:56:31.068462 kernel: loop2: detected capacity change from 0 to 114432 Sep 5 23:56:31.076994 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 23:56:31.107444 kernel: loop3: detected capacity change from 0 to 211168 Sep 5 23:56:31.155483 kernel: loop4: detected capacity change from 0 to 8 Sep 5 23:56:31.162484 kernel: loop5: detected capacity change from 0 to 114328 Sep 5 23:56:31.176444 kernel: loop6: detected capacity change from 0 to 114432 Sep 5 23:56:31.204443 kernel: loop7: detected capacity change from 0 to 211168 Sep 5 23:56:31.218507 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 5 23:56:31.219051 (sd-merge)[1195]: Merged extensions into '/usr'. Sep 5 23:56:31.225480 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 23:56:31.225505 systemd[1]: Reloading... Sep 5 23:56:31.341460 zram_generator::config[1225]: No configuration found. Sep 5 23:56:31.463890 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 23:56:31.499888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:56:31.547296 systemd[1]: Reloading finished in 320 ms. Sep 5 23:56:31.572095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 23:56:31.573740 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
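The (sd-merge) lines are systemd-sysext attaching the four extension images named there (the loopN "capacity change" messages above are those images being set up) and overlaying them onto /usr and /opt; this is how containerd, Docker and the kubernetes-v1.33.0 sysext written by Ignition earlier become visible in the otherwise read-only /usr. The merge can be inspected at runtime with:

    # list merged system extensions and the hierarchies they overlay
    systemd-sysext status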
Sep 5 23:56:31.584787 systemd[1]: Starting ensure-sysext.service... Sep 5 23:56:31.587197 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:56:31.595928 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Sep 5 23:56:31.595954 systemd[1]: Reloading... Sep 5 23:56:31.639872 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 23:56:31.640180 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 23:56:31.640850 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 23:56:31.641103 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Sep 5 23:56:31.641151 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Sep 5 23:56:31.646140 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:56:31.646155 systemd-tmpfiles[1260]: Skipping /boot Sep 5 23:56:31.656917 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:56:31.656938 systemd-tmpfiles[1260]: Skipping /boot Sep 5 23:56:31.703466 zram_generator::config[1286]: No configuration found. Sep 5 23:56:31.811688 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:56:31.859076 systemd[1]: Reloading finished in 262 ms. Sep 5 23:56:31.879432 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 23:56:31.880737 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:56:31.892788 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:56:31.904660 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 23:56:31.908353 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 23:56:31.912945 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:56:31.916662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:56:31.921754 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 23:56:31.928048 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:56:31.931538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:56:31.935762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:56:31.947782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:56:31.948466 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:56:31.950472 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:56:31.950625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:56:31.954731 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
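The "Duplicate line for path" warnings mean a later tmpfiles.d fragment re-declares a path an earlier one already configured, and the duplicate line is ignored; /boot is skipped because it sits behind the boot.automount set up earlier and has not been triggered. For reference, a tmpfiles.d line carries type, path, mode, owner, group and age fields; the file name and path below are illustrative only:

    # /etc/tmpfiles.d/example.conf (hypothetical)
    d /var/lib/example 0750 root root -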
Sep 5 23:56:31.960000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:56:31.961710 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:56:31.962616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:56:31.972227 systemd[1]: Finished ensure-sysext.service. Sep 5 23:56:31.980678 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 23:56:31.990107 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 23:56:31.996819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:56:31.997233 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:56:32.009004 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 23:56:32.010673 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 23:56:32.011869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:56:32.012087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:56:32.013743 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:56:32.013914 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:56:32.016339 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:56:32.016794 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:56:32.026292 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Sep 5 23:56:32.031584 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:56:32.032170 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:56:32.034317 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 23:56:32.055465 augenrules[1361]: No rules Sep 5 23:56:32.057627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:56:32.060009 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:56:32.076261 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:56:32.092408 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 23:56:32.099475 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 23:56:32.102482 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:56:32.164762 systemd-networkd[1376]: lo: Link UP Sep 5 23:56:32.164781 systemd-networkd[1376]: lo: Gained carrier Sep 5 23:56:32.165675 systemd-networkd[1376]: Enumeration completed Sep 5 23:56:32.165790 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:56:32.172739 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 23:56:32.200405 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 5 23:56:32.232086 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Sep 5 23:56:32.233268 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 23:56:32.247884 systemd-resolved[1329]: Positive Trust Anchors: Sep 5 23:56:32.248259 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:56:32.248348 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:56:32.253159 systemd-resolved[1329]: Using system hostname 'ci-4081-3-5-n-4ef3874a70'. Sep 5 23:56:32.257338 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:56:32.258351 systemd[1]: Reached target network.target - Network. Sep 5 23:56:32.258923 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:56:32.317486 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:56:32.317498 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:56:32.318836 systemd-networkd[1376]: eth1: Link UP Sep 5 23:56:32.318849 systemd-networkd[1376]: eth1: Gained carrier Sep 5 23:56:32.318871 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:56:32.321512 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1377) Sep 5 23:56:32.328508 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 23:56:32.339604 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 5 23:56:32.341778 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 5 23:56:32.387242 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 5 23:56:32.387378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:56:32.413025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:56:32.418335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:56:32.424673 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:56:32.427590 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:56:32.427641 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:56:32.428030 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:56:32.428199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:56:32.443880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
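
The negative trust anchors systemd-resolved lists above mark zones (private reverse-DNS ranges, .local, .internal and the like) where DNSSEC validation is skipped; membership is a per-label suffix match. A small Python sketch of that test against an excerpt of the logged list:

    # Label-suffix membership test against an excerpt of the negative
    # trust anchors logged above.
    NEGATIVE_ANCHORS = {
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
        "ipv4only.arpa", "corp", "home", "internal", "intranet",
        "lan", "local", "private", "test",
    }

    def under_negative_anchor(name: str) -> bool:
        labels = name.rstrip(".").split(".")
        # Covered if the name equals an anchor or is a subdomain of one.
        return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
                   for i in range(len(labels)))

    assert under_negative_anchor("printer.local")
    assert under_negative_anchor("42.0.0.10.in-addr.arpa")
    assert not under_negative_anchor("example.org")
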
Sep 5 23:56:32.444175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:56:32.445868 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:56:32.451909 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 5 23:56:32.456137 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:56:32.456155 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:56:32.457586 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 5 23:56:32.457981 systemd-networkd[1376]: eth0: Link UP Sep 5 23:56:32.457989 systemd-networkd[1376]: eth0: Gained carrier Sep 5 23:56:32.458011 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:56:32.459639 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 23:56:32.460648 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:56:32.461032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:56:32.465148 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 5 23:56:32.464305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:56:32.466441 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 5 23:56:32.466515 kernel: [drm] features: -context_init Sep 5 23:56:32.468705 kernel: [drm] number of scanouts: 1 Sep 5 23:56:32.468761 kernel: [drm] number of cap sets: 0 Sep 5 23:56:32.468774 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 5 23:56:32.470504 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 5 23:56:32.475436 kernel: Console: switching to colour frame buffer device 160x50 Sep 5 23:56:32.482866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:56:32.483459 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 5 23:56:32.498432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:56:32.498735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:56:32.504807 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:56:32.508499 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 23:56:32.523517 systemd-networkd[1376]: eth0: DHCPv4 address 138.199.175.7/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 5 23:56:32.524229 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 5 23:56:32.524925 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 5 23:56:32.577601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:56:32.611483 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 23:56:32.618663 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
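
The DHCPv4 leases above are /32 host addresses (10.0.0.3/32 on eth1, 138.199.175.7/32 on eth0) whose gateways lie outside the prefix, a common arrangement on Hetzner Cloud; the gateway only becomes reachable once an explicit on-link host route is installed. A short Python check of why the prefix alone does not cover it:

    import ipaddress

    # A /32 lease cannot reach its gateway by prefix alone: the gateway
    # must first be installed as an on-link host route.
    iface = ipaddress.ip_interface("138.199.175.7/32")
    gateway = ipaddress.ip_address("172.31.1.1")

    print(gateway in iface.network)  # False: not on-link by prefix
    # Conceptually networkd then adds: 172.31.1.1 dev eth0 scope link,
    # followed by: default via 172.31.1.1.
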
Sep 5 23:56:32.630894 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:56:32.660225 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 23:56:32.662578 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:56:32.664101 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:56:32.665320 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 23:56:32.666231 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 23:56:32.667221 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 23:56:32.668026 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 23:56:32.668763 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 23:56:32.669478 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:56:32.669518 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:56:32.670013 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:56:32.671882 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:56:32.674141 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:56:32.683378 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:56:32.686188 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 23:56:32.688025 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 23:56:32.689114 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:56:32.689886 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:56:32.690549 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:56:32.690655 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:56:32.692545 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:56:32.696582 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 23:56:32.698375 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:56:32.706628 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:56:32.712045 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 23:56:32.713974 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 23:56:32.715638 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 23:56:32.720647 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 23:56:32.725596 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 23:56:32.731343 jq[1449]: false Sep 5 23:56:32.731975 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 5 23:56:32.740653 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 23:56:32.746757 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 5 23:56:32.755486 extend-filesystems[1450]: Found loop4 Sep 5 23:56:32.755486 extend-filesystems[1450]: Found loop5 Sep 5 23:56:32.755486 extend-filesystems[1450]: Found loop6 Sep 5 23:56:32.755486 extend-filesystems[1450]: Found loop7 Sep 5 23:56:32.755486 extend-filesystems[1450]: Found sda Sep 5 23:56:32.755486 extend-filesystems[1450]: Found sda1 Sep 5 23:56:32.755486 extend-filesystems[1450]: Found sda2 Sep 5 23:56:32.755486 extend-filesystems[1450]: Found sda3 Sep 5 23:56:32.755486 extend-filesystems[1450]: Found usr Sep 5 23:56:32.755486 extend-filesystems[1450]: Found sda4 Sep 5 23:56:32.755486 extend-filesystems[1450]: Found sda6 Sep 5 23:56:32.755486 extend-filesystems[1450]: Found sda7 Sep 5 23:56:32.755486 extend-filesystems[1450]: Found sda9 Sep 5 23:56:32.755486 extend-filesystems[1450]: Checking size of /dev/sda9 Sep 5 23:56:32.804398 coreos-metadata[1445]: Sep 05 23:56:32.765 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 5 23:56:32.804398 coreos-metadata[1445]: Sep 05 23:56:32.771 INFO Fetch successful Sep 5 23:56:32.804398 coreos-metadata[1445]: Sep 05 23:56:32.778 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 5 23:56:32.804398 coreos-metadata[1445]: Sep 05 23:56:32.779 INFO Fetch successful Sep 5 23:56:32.766300 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 23:56:32.821237 extend-filesystems[1450]: Resized partition /dev/sda9 Sep 5 23:56:32.768285 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 23:56:32.826018 extend-filesystems[1476]: resize2fs 1.47.1 (20-May-2024) Sep 5 23:56:32.769408 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 23:56:32.771045 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 23:56:32.776643 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:56:32.779771 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:56:32.779939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 23:56:32.788526 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 23:56:32.823134 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:56:32.823317 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 23:56:32.841502 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 5 23:56:32.840713 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:56:32.840504 dbus-daemon[1446]: [system] SELinux support is enabled Sep 5 23:56:32.845923 jq[1461]: true Sep 5 23:56:32.860319 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:56:32.860365 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 23:56:32.861266 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
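
The coreos-metadata fetches above hit the Hetzner metadata service on the link-local address 169.254.169.254. A minimal Python equivalent of the same request, with the endpoint path taken from the log; it only succeeds from inside a Hetzner instance:

    import urllib.request

    # Fetch the same Hetzner metadata endpoint coreos-metadata logs
    # above. 169.254.169.254 is link-local, so this works on-instance only.
    URL = "http://169.254.169.254/hetzner/v1/metadata"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(resp.read().decode())
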
Sep 5 23:56:32.861282 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 23:56:32.869528 tar[1463]: linux-arm64/LICENSE Sep 5 23:56:32.869528 tar[1463]: linux-arm64/helm Sep 5 23:56:32.871855 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:56:32.873761 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 23:56:32.884347 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:56:32.902510 jq[1490]: true Sep 5 23:56:32.927049 update_engine[1460]: I20250905 23:56:32.926682 1460 main.cc:92] Flatcar Update Engine starting Sep 5 23:56:32.945020 update_engine[1460]: I20250905 23:56:32.944772 1460 update_check_scheduler.cc:74] Next update check in 3m42s Sep 5 23:56:32.944669 systemd[1]: Started update-engine.service - Update Engine. Sep 5 23:56:32.948551 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1371) Sep 5 23:56:32.946974 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 23:56:32.988682 systemd-logind[1457]: New seat seat0. Sep 5 23:56:32.990689 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 23:56:32.992851 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 23:56:32.994754 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 23:56:32.995040 systemd-logind[1457]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 5 23:56:33.000208 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 23:56:33.018441 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 5 23:56:33.034296 extend-filesystems[1476]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 5 23:56:33.034296 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 5 23:56:33.034296 extend-filesystems[1476]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 5 23:56:33.044947 extend-filesystems[1450]: Resized filesystem in /dev/sda9 Sep 5 23:56:33.044947 extend-filesystems[1450]: Found sr0 Sep 5 23:56:33.040969 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 23:56:33.041241 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 23:56:33.063596 bash[1516]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:56:33.070173 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 23:56:33.081255 systemd[1]: Starting sshkeys.service... Sep 5 23:56:33.111922 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 5 23:56:33.123226 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
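
The resize2fs run above grows /dev/sda9 online from 1617920 to 9393147 blocks of 4 KiB each. In human units, a quick Python check using only the logged numbers:

    # The ext4 resize above, converted from 4 KiB blocks to GiB.
    BLOCK = 4096
    old_blocks, new_blocks = 1_617_920, 9_393_147

    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {gib(old_blocks):.2f} GiB")  # ~6.17 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")  # ~35.83 GiB
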
Sep 5 23:56:33.175088 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:56:33.185779 coreos-metadata[1528]: Sep 05 23:56:33.185 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 5 23:56:33.187739 coreos-metadata[1528]: Sep 05 23:56:33.186 INFO Fetch successful Sep 5 23:56:33.191863 unknown[1528]: wrote ssh authorized keys file for user: core Sep 5 23:56:33.223505 containerd[1480]: time="2025-09-05T23:56:33.223394480Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 23:56:33.236907 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:56:33.238431 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 23:56:33.244493 systemd[1]: Finished sshkeys.service. Sep 5 23:56:33.247195 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 23:56:33.284911 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 23:56:33.287209 containerd[1480]: time="2025-09-05T23:56:33.287149280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:56:33.293842 containerd[1480]: time="2025-09-05T23:56:33.293781040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:56:33.293842 containerd[1480]: time="2025-09-05T23:56:33.293830440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 23:56:33.293842 containerd[1480]: time="2025-09-05T23:56:33.293849280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 23:56:33.294106 containerd[1480]: time="2025-09-05T23:56:33.294079080Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 23:56:33.294135 containerd[1480]: time="2025-09-05T23:56:33.294107000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 23:56:33.294213 containerd[1480]: time="2025-09-05T23:56:33.294191960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:56:33.294213 containerd[1480]: time="2025-09-05T23:56:33.294210280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:56:33.294437 containerd[1480]: time="2025-09-05T23:56:33.294388120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:56:33.295387 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 23:56:33.296740 containerd[1480]: time="2025-09-05T23:56:33.294415640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 23:56:33.296797 containerd[1480]: time="2025-09-05T23:56:33.296763680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:56:33.296797 containerd[1480]: time="2025-09-05T23:56:33.296787480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 23:56:33.297287 containerd[1480]: time="2025-09-05T23:56:33.296939760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:56:33.298464 containerd[1480]: time="2025-09-05T23:56:33.297722240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:56:33.298464 containerd[1480]: time="2025-09-05T23:56:33.297886640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:56:33.298464 containerd[1480]: time="2025-09-05T23:56:33.297903480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 23:56:33.298464 containerd[1480]: time="2025-09-05T23:56:33.298008240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 23:56:33.298464 containerd[1480]: time="2025-09-05T23:56:33.298054040Z" level=info msg="metadata content store policy set" policy=shared Sep 5 23:56:33.306030 containerd[1480]: time="2025-09-05T23:56:33.305824920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 23:56:33.306030 containerd[1480]: time="2025-09-05T23:56:33.305901440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 23:56:33.306030 containerd[1480]: time="2025-09-05T23:56:33.305919360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 23:56:33.306030 containerd[1480]: time="2025-09-05T23:56:33.305936880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 23:56:33.306030 containerd[1480]: time="2025-09-05T23:56:33.305970520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 23:56:33.306215 containerd[1480]: time="2025-09-05T23:56:33.306152120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307039480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307226440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307244160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307257360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307272560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307287760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307300880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307314640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307336040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307349000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307361880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 23:56:33.307434 containerd[1480]: time="2025-09-05T23:56:33.307395040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 23:56:33.307696 containerd[1480]: time="2025-09-05T23:56:33.307416080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307696 containerd[1480]: time="2025-09-05T23:56:33.307485320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307696 containerd[1480]: time="2025-09-05T23:56:33.307509440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307696 containerd[1480]: time="2025-09-05T23:56:33.307523360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307696 containerd[1480]: time="2025-09-05T23:56:33.307535560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307696 containerd[1480]: time="2025-09-05T23:56:33.307549040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307696 containerd[1480]: time="2025-09-05T23:56:33.307561720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307696 containerd[1480]: time="2025-09-05T23:56:33.307575480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307696 containerd[1480]: time="2025-09-05T23:56:33.307590600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307696 containerd[1480]: time="2025-09-05T23:56:33.307613840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307883 containerd[1480]: time="2025-09-05T23:56:33.307702840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307883 containerd[1480]: time="2025-09-05T23:56:33.307722280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 5 23:56:33.307883 containerd[1480]: time="2025-09-05T23:56:33.307735800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307883 containerd[1480]: time="2025-09-05T23:56:33.307769000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 23:56:33.307883 containerd[1480]: time="2025-09-05T23:56:33.307795720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307883 containerd[1480]: time="2025-09-05T23:56:33.307809560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.307883 containerd[1480]: time="2025-09-05T23:56:33.307820920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 23:56:33.308025 containerd[1480]: time="2025-09-05T23:56:33.307965840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 23:56:33.308025 containerd[1480]: time="2025-09-05T23:56:33.307989000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 23:56:33.308025 containerd[1480]: time="2025-09-05T23:56:33.308000840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 23:56:33.308025 containerd[1480]: time="2025-09-05T23:56:33.308014720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 23:56:33.308025 containerd[1480]: time="2025-09-05T23:56:33.308025440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 23:56:33.308118 containerd[1480]: time="2025-09-05T23:56:33.308038480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 23:56:33.308118 containerd[1480]: time="2025-09-05T23:56:33.308048760Z" level=info msg="NRI interface is disabled by configuration." Sep 5 23:56:33.308118 containerd[1480]: time="2025-09-05T23:56:33.308058880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 5 23:56:33.309785 containerd[1480]: time="2025-09-05T23:56:33.309704240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 23:56:33.309926 containerd[1480]: time="2025-09-05T23:56:33.309802680Z" level=info msg="Connect containerd service" Sep 5 23:56:33.309926 containerd[1480]: time="2025-09-05T23:56:33.309848840Z" level=info msg="using legacy CRI server" Sep 5 23:56:33.309926 containerd[1480]: time="2025-09-05T23:56:33.309855920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 23:56:33.310033 containerd[1480]: time="2025-09-05T23:56:33.309970760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 23:56:33.311429 containerd[1480]: time="2025-09-05T23:56:33.310711720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:56:33.312057 
containerd[1480]: time="2025-09-05T23:56:33.311591320Z" level=info msg="Start subscribing containerd event" Sep 5 23:56:33.312057 containerd[1480]: time="2025-09-05T23:56:33.311647440Z" level=info msg="Start recovering state" Sep 5 23:56:33.312057 containerd[1480]: time="2025-09-05T23:56:33.311715880Z" level=info msg="Start event monitor" Sep 5 23:56:33.312057 containerd[1480]: time="2025-09-05T23:56:33.311727600Z" level=info msg="Start snapshots syncer" Sep 5 23:56:33.312057 containerd[1480]: time="2025-09-05T23:56:33.311738120Z" level=info msg="Start cni network conf syncer for default" Sep 5 23:56:33.312057 containerd[1480]: time="2025-09-05T23:56:33.311745960Z" level=info msg="Start streaming server" Sep 5 23:56:33.312038 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 23:56:33.312261 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 23:56:33.315034 containerd[1480]: time="2025-09-05T23:56:33.314993520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 23:56:33.316566 containerd[1480]: time="2025-09-05T23:56:33.316519000Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 23:56:33.317841 containerd[1480]: time="2025-09-05T23:56:33.317802320Z" level=info msg="containerd successfully booted in 0.096517s" Sep 5 23:56:33.320874 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 23:56:33.321729 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 23:56:33.345394 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 23:56:33.353236 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 23:56:33.355738 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 23:56:33.357734 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 23:56:33.548631 systemd-networkd[1376]: eth1: Gained IPv6LL Sep 5 23:56:33.549217 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 5 23:56:33.552377 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 23:56:33.554117 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 23:56:33.566179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:56:33.568595 tar[1463]: linux-arm64/README.md Sep 5 23:56:33.569790 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 23:56:33.595498 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 23:56:33.600843 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 23:56:34.252596 systemd-networkd[1376]: eth0: Gained IPv6LL Sep 5 23:56:34.253632 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 5 23:56:34.368817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:56:34.369354 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:56:34.371379 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 23:56:34.372833 systemd[1]: Startup finished in 802ms (kernel) + 5.372s (initrd) + 4.343s (userspace) = 10.519s. 
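
The "Startup finished" record that closes the boot above is worth decoding: the three phases are rounded individually, so they sum to 10.517s, while the logged 10.519s total is computed from raw timestamps and so differs by a few milliseconds. A Python sketch of parsing the record:

    import re

    # Pull the per-phase timings out of the "Startup finished" record.
    line = ("Startup finished in 802ms (kernel) + 5.372s (initrd) "
            "+ 4.343s (userspace) = 10.519s.")

    phases = {name: float(v) / (1000 if unit == "ms" else 1)
              for v, unit, name in re.findall(r"([\d.]+)(ms|s) \((\w+)\)", line)}
    print(phases)                # {'kernel': 0.802, 'initrd': 5.372, ...}
    print(sum(phases.values()))  # 10.517: rounded phases vs. raw-timestamp total
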
Sep 5 23:56:34.906916 kubelet[1576]: E0905 23:56:34.906871 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:56:34.910359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:56:34.910634 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:56:45.161319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 23:56:45.167744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:56:45.308714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:56:45.310462 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:56:45.357152 kubelet[1595]: E0905 23:56:45.357106 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:56:45.361141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:56:45.361402 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:56:55.612093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 23:56:55.626975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:56:55.751829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:56:55.757536 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:56:55.798703 kubelet[1610]: E0905 23:56:55.798656 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:56:55.801855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:56:55.802054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:57:04.508770 systemd-timesyncd[1345]: Contacted time server 78.47.168.188:123 (2.flatcar.pool.ntp.org). Sep 5 23:57:04.508911 systemd-timesyncd[1345]: Initial clock synchronization to Fri 2025-09-05 23:57:04.364807 UTC. Sep 5 23:57:06.053065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 5 23:57:06.058766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:57:06.190669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
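
The kubelet failure above, and the identical ones that follow, are a crash loop: kubelet exits because /var/lib/kubelet/config.yaml does not exist yet (kubeadm normally writes it during init/join), and systemd keeps restarting the unit with an increasing counter. A hedged Python sketch that stages a stub config; the two fields shown are the required header of a KubeletConfiguration, and writing one by hand is an illustration of the missing file, not a supported fix:

    import pathlib

    # The crash loop above is kubelet exiting because this file is
    # missing; kubeadm normally creates it. Staging a stub like this is
    # illustration only (requires root; real configs carry more fields).
    CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

    STUB = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    """

    if not CONFIG.exists():
        CONFIG.parent.mkdir(parents=True, exist_ok=True)
        CONFIG.write_text(STUB)
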
Sep 5 23:57:06.198065 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:57:06.242999 kubelet[1625]: E0905 23:57:06.242929 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:57:06.245804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:57:06.245979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:57:15.216146 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 23:57:15.221803 systemd[1]: Started sshd@0-138.199.175.7:22-139.178.68.195:48624.service - OpenSSH per-connection server daemon (139.178.68.195:48624). Sep 5 23:57:16.218505 sshd[1634]: Accepted publickey for core from 139.178.68.195 port 48624 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:16.220667 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:16.236071 systemd-logind[1457]: New session 1 of user core. Sep 5 23:57:16.236447 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 23:57:16.242792 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 23:57:16.258456 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 23:57:16.259352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 5 23:57:16.267835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:57:16.270263 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 23:57:16.279960 (systemd)[1639]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:57:16.399719 systemd[1639]: Queued start job for default target default.target. Sep 5 23:57:16.401401 systemd[1639]: Created slice app.slice - User Application Slice. Sep 5 23:57:16.401475 systemd[1639]: Reached target paths.target - Paths. Sep 5 23:57:16.401488 systemd[1639]: Reached target timers.target - Timers. Sep 5 23:57:16.403646 systemd[1639]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 23:57:16.412796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:57:16.416274 systemd[1639]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 23:57:16.417076 systemd[1639]: Reached target sockets.target - Sockets. Sep 5 23:57:16.417106 systemd[1639]: Reached target basic.target - Basic System. Sep 5 23:57:16.417149 systemd[1639]: Reached target default.target - Main User Target. Sep 5 23:57:16.417175 systemd[1639]: Startup finished in 128ms. Sep 5 23:57:16.417940 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 23:57:16.418132 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:57:16.421233 systemd[1]: Started session-1.scope - Session 1 of User core. 
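
The "SHA256:+hHH..." value in the publickey-accept line above is OpenSSH's key fingerprint: a SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding stripped. A self-contained Python sketch; the key blob here is a zero-filled stand-in, not the host's real key:

    import base64, hashlib, struct

    # OpenSSH SHA256 fingerprint: base64(sha256(key blob)), '=' stripped.
    def openssh_fingerprint(blob: bytes) -> str:
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Build a syntactically valid stand-in ed25519 blob (type tag plus
    # 32 zero bytes) purely to exercise the function.
    key_type = b"ssh-ed25519"
    blob = (struct.pack(">I", len(key_type)) + key_type
            + struct.pack(">I", 32) + bytes(32))
    print(openssh_fingerprint(blob))
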
Sep 5 23:57:16.467988 kubelet[1653]: E0905 23:57:16.467939 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:57:16.471291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:57:16.471472 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:57:17.123872 systemd[1]: Started sshd@1-138.199.175.7:22-139.178.68.195:48626.service - OpenSSH per-connection server daemon (139.178.68.195:48626). Sep 5 23:57:17.855356 update_engine[1460]: I20250905 23:57:17.855161 1460 update_attempter.cc:509] Updating boot flags... Sep 5 23:57:17.908475 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1675) Sep 5 23:57:17.963704 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1674) Sep 5 23:57:18.025326 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1674) Sep 5 23:57:18.124530 sshd[1664]: Accepted publickey for core from 139.178.68.195 port 48626 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:18.127395 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:18.134734 systemd-logind[1457]: New session 2 of user core. Sep 5 23:57:18.144725 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 23:57:18.814106 sshd[1664]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:18.820184 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. Sep 5 23:57:18.820616 systemd[1]: sshd@1-138.199.175.7:22-139.178.68.195:48626.service: Deactivated successfully. Sep 5 23:57:18.823080 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 23:57:18.825495 systemd-logind[1457]: Removed session 2. Sep 5 23:57:18.994014 systemd[1]: Started sshd@2-138.199.175.7:22-139.178.68.195:48638.service - OpenSSH per-connection server daemon (139.178.68.195:48638). Sep 5 23:57:19.977765 sshd[1692]: Accepted publickey for core from 139.178.68.195 port 48638 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:19.980502 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:19.988542 systemd-logind[1457]: New session 3 of user core. Sep 5 23:57:19.998174 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 23:57:20.661354 sshd[1692]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:20.665297 systemd[1]: sshd@2-138.199.175.7:22-139.178.68.195:48638.service: Deactivated successfully. Sep 5 23:57:20.666925 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 23:57:20.670662 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. Sep 5 23:57:20.672305 systemd-logind[1457]: Removed session 3. Sep 5 23:57:20.849025 systemd[1]: Started sshd@3-138.199.175.7:22-139.178.68.195:57388.service - OpenSSH per-connection server daemon (139.178.68.195:57388). 
Sep 5 23:57:21.838784 sshd[1699]: Accepted publickey for core from 139.178.68.195 port 57388 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:21.841603 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:21.848482 systemd-logind[1457]: New session 4 of user core. Sep 5 23:57:21.855756 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 23:57:22.529682 sshd[1699]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:22.535371 systemd[1]: sshd@3-138.199.175.7:22-139.178.68.195:57388.service: Deactivated successfully. Sep 5 23:57:22.538709 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 23:57:22.539900 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. Sep 5 23:57:22.541105 systemd-logind[1457]: Removed session 4. Sep 5 23:57:22.715262 systemd[1]: Started sshd@4-138.199.175.7:22-139.178.68.195:57394.service - OpenSSH per-connection server daemon (139.178.68.195:57394). Sep 5 23:57:23.707048 sshd[1706]: Accepted publickey for core from 139.178.68.195 port 57394 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:23.709541 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:23.716266 systemd-logind[1457]: New session 5 of user core. Sep 5 23:57:23.722748 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 23:57:24.247629 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 23:57:24.248375 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:57:24.267570 sudo[1709]: pam_unix(sudo:session): session closed for user root Sep 5 23:57:24.430109 sshd[1706]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:24.434533 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. Sep 5 23:57:24.434798 systemd[1]: sshd@4-138.199.175.7:22-139.178.68.195:57394.service: Deactivated successfully. Sep 5 23:57:24.436686 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 23:57:24.438839 systemd-logind[1457]: Removed session 5. Sep 5 23:57:24.608052 systemd[1]: Started sshd@5-138.199.175.7:22-139.178.68.195:57402.service - OpenSSH per-connection server daemon (139.178.68.195:57402). Sep 5 23:57:25.599884 sshd[1714]: Accepted publickey for core from 139.178.68.195 port 57402 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:25.601916 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:25.606882 systemd-logind[1457]: New session 6 of user core. Sep 5 23:57:25.614750 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 23:57:26.131288 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:57:26.131686 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:57:26.135799 sudo[1718]: pam_unix(sudo:session): session closed for user root Sep 5 23:57:26.143105 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:57:26.143413 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:57:26.158823 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Sep 5 23:57:26.174295 auditctl[1721]: No rules Sep 5 23:57:26.175136 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:57:26.175571 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 23:57:26.182961 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:57:26.213085 augenrules[1739]: No rules Sep 5 23:57:26.214969 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:57:26.218722 sudo[1717]: pam_unix(sudo:session): session closed for user root Sep 5 23:57:26.381141 sshd[1714]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:26.385813 systemd[1]: sshd@5-138.199.175.7:22-139.178.68.195:57402.service: Deactivated successfully. Sep 5 23:57:26.391114 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 23:57:26.392568 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Sep 5 23:57:26.393888 systemd-logind[1457]: Removed session 6. Sep 5 23:57:26.555241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 5 23:57:26.562785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:57:26.566013 systemd[1]: Started sshd@6-138.199.175.7:22-139.178.68.195:57412.service - OpenSSH per-connection server daemon (139.178.68.195:57412). Sep 5 23:57:26.681100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:57:26.686762 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:57:26.723529 kubelet[1757]: E0905 23:57:26.723464 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:57:26.727437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:57:26.727804 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:57:27.562651 sshd[1748]: Accepted publickey for core from 139.178.68.195 port 57412 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:27.564794 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:27.570528 systemd-logind[1457]: New session 7 of user core. Sep 5 23:57:27.576683 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 23:57:28.095722 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:57:28.096360 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:57:28.403926 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 23:57:28.405261 (dockerd)[1780]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 23:57:28.656415 dockerd[1780]: time="2025-09-05T23:57:28.655661821Z" level=info msg="Starting up" Sep 5 23:57:28.747361 dockerd[1780]: time="2025-09-05T23:57:28.747304730Z" level=info msg="Loading containers: start." 
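
The dockerd startup just below warns that the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, which makes it fall back from the native overlay2 diff path. A Python sketch of one way to inspect such an option, assuming the kernel exposes /proc/config.gz (IKCONFIG); the location and availability vary by distro, so treat this as a sketch:

    import gzip, pathlib, re

    # Check the kernel option behind the overlay2 warning dockerd prints
    # below, assuming /proc/config.gz is available (IKCONFIG_PROC).
    OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"

    cfg = pathlib.Path("/proc/config.gz")
    if cfg.exists():
        text = gzip.decompress(cfg.read_bytes()).decode()
        m = re.search(rf"^{OPTION}=(\S+)$", text, re.M)
        print(f"{OPTION} = {m.group(1) if m else 'not set'}")
    else:
        print("kernel config not exposed at /proc/config.gz")
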
Sep 5 23:57:28.853458 kernel: Initializing XFRM netlink socket Sep 5 23:57:28.948049 systemd-networkd[1376]: docker0: Link UP Sep 5 23:57:28.977230 dockerd[1780]: time="2025-09-05T23:57:28.977161599Z" level=info msg="Loading containers: done." Sep 5 23:57:28.992006 dockerd[1780]: time="2025-09-05T23:57:28.991920306Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:57:28.992181 dockerd[1780]: time="2025-09-05T23:57:28.992097838Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 23:57:28.992640 dockerd[1780]: time="2025-09-05T23:57:28.992612969Z" level=info msg="Daemon has completed initialization" Sep 5 23:57:29.031100 dockerd[1780]: time="2025-09-05T23:57:29.030893715Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:57:29.031662 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 23:57:30.131924 containerd[1480]: time="2025-09-05T23:57:30.131571177Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 5 23:57:30.772461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2623312195.mount: Deactivated successfully. Sep 5 23:57:32.135779 containerd[1480]: time="2025-09-05T23:57:32.135701967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:32.138187 containerd[1480]: time="2025-09-05T23:57:32.137786469Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352705" Sep 5 23:57:32.141458 containerd[1480]: time="2025-09-05T23:57:32.139255113Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:32.147608 containerd[1480]: time="2025-09-05T23:57:32.147560019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:32.148366 containerd[1480]: time="2025-09-05T23:57:32.148329524Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.016710696s" Sep 5 23:57:32.148444 containerd[1480]: time="2025-09-05T23:57:32.148367785Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 5 23:57:32.150132 containerd[1480]: time="2025-09-05T23:57:32.150008104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 5 23:57:33.801650 containerd[1480]: time="2025-09-05T23:57:33.801596439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:33.803899 containerd[1480]: time="2025-09-05T23:57:33.803554043Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active 
requests=0, bytes read=23536997" Sep 5 23:57:33.805928 containerd[1480]: time="2025-09-05T23:57:33.805354034Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:33.809706 containerd[1480]: time="2025-09-05T23:57:33.809661795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:33.810857 containerd[1480]: time="2025-09-05T23:57:33.810819101Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.660772335s" Sep 5 23:57:33.810984 containerd[1480]: time="2025-09-05T23:57:33.810967797Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 5 23:57:33.811967 containerd[1480]: time="2025-09-05T23:57:33.811940862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 5 23:57:35.213322 containerd[1480]: time="2025-09-05T23:57:35.213192564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:35.216266 containerd[1480]: time="2025-09-05T23:57:35.216218215Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292034" Sep 5 23:57:35.218041 containerd[1480]: time="2025-09-05T23:57:35.217958966Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:35.222804 containerd[1480]: time="2025-09-05T23:57:35.222735844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:35.225579 containerd[1480]: time="2025-09-05T23:57:35.225126103Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.413143217s" Sep 5 23:57:35.225579 containerd[1480]: time="2025-09-05T23:57:35.225174807Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 5 23:57:35.226296 containerd[1480]: time="2025-09-05T23:57:35.226258772Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 5 23:57:36.449659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1772045745.mount: Deactivated successfully. 
Sep 5 23:57:36.798919 containerd[1480]: time="2025-09-05T23:57:36.798585584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:36.800042 containerd[1480]: time="2025-09-05T23:57:36.799843205Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199985"
Sep 5 23:57:36.801096 containerd[1480]: time="2025-09-05T23:57:36.800886250Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:36.803972 containerd[1480]: time="2025-09-05T23:57:36.803937129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:36.804774 containerd[1480]: time="2025-09-05T23:57:36.804735288Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.577256114s"
Sep 5 23:57:36.804833 containerd[1480]: time="2025-09-05T23:57:36.804775476Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 5 23:57:36.805274 containerd[1480]: time="2025-09-05T23:57:36.805251133Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 5 23:57:36.812838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Sep 5 23:57:36.823773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:57:36.944194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:57:36.958193 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:57:37.003951 kubelet[1996]: E0905 23:57:37.003897 1996 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:57:37.008783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:57:37.008926 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:57:37.379227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2383598003.mount: Deactivated successfully.
Sep 5 23:57:38.482608 containerd[1480]: time="2025-09-05T23:57:38.482546024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:38.484175 containerd[1480]: time="2025-09-05T23:57:38.484093097Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Sep 5 23:57:38.485365 containerd[1480]: time="2025-09-05T23:57:38.484715854Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:38.492524 containerd[1480]: time="2025-09-05T23:57:38.492460976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:38.493945 containerd[1480]: time="2025-09-05T23:57:38.493906822Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.687967335s"
Sep 5 23:57:38.494051 containerd[1480]: time="2025-09-05T23:57:38.494035165Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 5 23:57:38.494952 containerd[1480]: time="2025-09-05T23:57:38.494929045Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 5 23:57:39.034856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330221167.mount: Deactivated successfully.
Sep 5 23:57:39.040903 containerd[1480]: time="2025-09-05T23:57:39.040842769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:39.041701 containerd[1480]: time="2025-09-05T23:57:39.041672584Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Sep 5 23:57:39.042677 containerd[1480]: time="2025-09-05T23:57:39.042404971Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:39.044975 containerd[1480]: time="2025-09-05T23:57:39.044935091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:39.046044 containerd[1480]: time="2025-09-05T23:57:39.046008635Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 550.981523ms"
Sep 5 23:57:39.046044 containerd[1480]: time="2025-09-05T23:57:39.046044550Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 5 23:57:39.046754 containerd[1480]: time="2025-09-05T23:57:39.046544647Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 5 23:57:39.599897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320831031.mount: Deactivated successfully.
Sep 5 23:57:42.685013 containerd[1480]: time="2025-09-05T23:57:42.684934654Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465339"
Sep 5 23:57:42.686326 containerd[1480]: time="2025-09-05T23:57:42.685947905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:42.688867 containerd[1480]: time="2025-09-05T23:57:42.688808559Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:42.690595 containerd[1480]: time="2025-09-05T23:57:42.690212169Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.643632686s"
Sep 5 23:57:42.690595 containerd[1480]: time="2025-09-05T23:57:42.690250605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 5 23:57:42.691148 containerd[1480]: time="2025-09-05T23:57:42.691121671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:57:47.063095 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Sep 5 23:57:47.072644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:57:47.193676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:57:47.198694 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:57:47.249066 kubelet[2143]: E0905 23:57:47.249023 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:57:47.252307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:57:47.252641 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:57:48.682314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:57:48.694138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:57:48.717796 systemd[1]: Reloading requested from client PID 2157 ('systemctl') (unit session-7.scope)...
Sep 5 23:57:48.717817 systemd[1]: Reloading...
Sep 5 23:57:48.835585 zram_generator::config[2197]: No configuration found.
Sep 5 23:57:48.937776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:57:49.009208 systemd[1]: Reloading finished in 291 ms.
Sep 5 23:57:49.058523 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 5 23:57:49.058623 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 5 23:57:49.059062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:57:49.068681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:57:49.200729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:57:49.203473 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 23:57:49.251062 kubelet[2245]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 23:57:49.251455 kubelet[2245]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 5 23:57:49.251515 kubelet[2245]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 23:57:49.251678 kubelet[2245]: I0905 23:57:49.251642 2245 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 23:57:49.957000 kubelet[2245]: I0905 23:57:49.956939 2245 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 5 23:57:49.957000 kubelet[2245]: I0905 23:57:49.956984 2245 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 23:57:49.957757 kubelet[2245]: I0905 23:57:49.957728 2245 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 5 23:57:49.989199 kubelet[2245]: E0905 23:57:49.989151 2245 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://138.199.175.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.199.175.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 5 23:57:49.991802 kubelet[2245]: I0905 23:57:49.991742 2245 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 23:57:50.003457 kubelet[2245]: E0905 23:57:50.002100 2245 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 5 23:57:50.003457 kubelet[2245]: I0905 23:57:50.002169 2245 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 5 23:57:50.006157 kubelet[2245]: I0905 23:57:50.006112 2245 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 23:57:50.008456 kubelet[2245]: I0905 23:57:50.008367 2245 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 23:57:50.008654 kubelet[2245]: I0905 23:57:50.008443 2245 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-4ef3874a70","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 23:57:50.008751 kubelet[2245]: I0905 23:57:50.008722 2245 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 23:57:50.008751 kubelet[2245]: I0905 23:57:50.008737 2245 container_manager_linux.go:303] "Creating device plugin manager"
Sep 5 23:57:50.008998 kubelet[2245]: I0905 23:57:50.008966 2245 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 23:57:50.013264 kubelet[2245]: I0905 23:57:50.013208 2245 kubelet.go:480] "Attempting to sync node with API server"
Sep 5 23:57:50.013264 kubelet[2245]: I0905 23:57:50.013246 2245 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 23:57:50.013264 kubelet[2245]: I0905 23:57:50.013281 2245 kubelet.go:386] "Adding apiserver pod source"
Sep 5 23:57:50.013264 kubelet[2245]: I0905 23:57:50.013308 2245 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 23:57:50.021154 kubelet[2245]: E0905 23:57:50.020701 2245 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://138.199.175.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-4ef3874a70&limit=500&resourceVersion=0\": dial tcp 138.199.175.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 5 23:57:50.021154 kubelet[2245]: E0905 23:57:50.020806 2245 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://138.199.175.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.175.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 5 23:57:50.021308 kubelet[2245]: I0905 23:57:50.021233 2245 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 5 23:57:50.022933 kubelet[2245]: I0905 23:57:50.022083 2245 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 5 23:57:50.022933 kubelet[2245]: W0905 23:57:50.022234 2245 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 5 23:57:50.026370 kubelet[2245]: I0905 23:57:50.026340 2245 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 5 23:57:50.026508 kubelet[2245]: I0905 23:57:50.026394 2245 server.go:1289] "Started kubelet"
Sep 5 23:57:50.030854 kubelet[2245]: I0905 23:57:50.030806 2245 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 23:57:50.036219 kubelet[2245]: I0905 23:57:50.036178 2245 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 23:57:50.037774 kubelet[2245]: I0905 23:57:50.037736 2245 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 23:57:50.039314 kubelet[2245]: I0905 23:57:50.039265 2245 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 5 23:57:50.040449 kubelet[2245]: E0905 23:57:50.039687 2245 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-4ef3874a70\" not found"
Sep 5 23:57:50.041501 kubelet[2245]: I0905 23:57:50.041481 2245 server.go:317] "Adding debug handlers to kubelet server"
Sep 5 23:57:50.044181 kubelet[2245]: I0905 23:57:50.044147 2245 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 5 23:57:50.044253 kubelet[2245]: I0905 23:57:50.044209 2245 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 23:57:50.044779 kubelet[2245]: I0905 23:57:50.044725 2245 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 23:57:50.045072 kubelet[2245]: I0905 23:57:50.045055 2245 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 23:57:50.046735 kubelet[2245]: I0905 23:57:50.046708 2245 factory.go:223] Registration of the systemd container factory successfully
Sep 5 23:57:50.046962 kubelet[2245]: I0905 23:57:50.046941 2245 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 23:57:50.048878 kubelet[2245]: E0905 23:57:50.047396 2245 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.175.7:6443/api/v1/namespaces/default/events\": dial tcp 138.199.175.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-4ef3874a70.1862884d41d6aede default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-4ef3874a70,UID:ci-4081-3-5-n-4ef3874a70,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-4ef3874a70,},FirstTimestamp:2025-09-05 23:57:50.02636259 +0000 UTC m=+0.817308457,LastTimestamp:2025-09-05 23:57:50.02636259 +0000 UTC m=+0.817308457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-4ef3874a70,}"
Sep 5 23:57:50.049163 kubelet[2245]: E0905 23:57:50.049138 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.175.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-4ef3874a70?timeout=10s\": dial tcp 138.199.175.7:6443: connect: connection refused" interval="200ms"
Sep 5 23:57:50.050390 kubelet[2245]: E0905 23:57:50.050255 2245 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://138.199.175.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.175.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 5 23:57:50.050913 kubelet[2245]: E0905 23:57:50.050895 2245 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 23:57:50.052558 kubelet[2245]: I0905 23:57:50.051124 2245 factory.go:223] Registration of the containerd container factory successfully
Sep 5 23:57:50.058477 kubelet[2245]: I0905 23:57:50.058412 2245 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 5 23:57:50.059496 kubelet[2245]: I0905 23:57:50.059470 2245 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 5 23:57:50.059496 kubelet[2245]: I0905 23:57:50.059492 2245 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 5 23:57:50.059592 kubelet[2245]: I0905 23:57:50.059514 2245 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 5 23:57:50.059592 kubelet[2245]: I0905 23:57:50.059521 2245 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 5 23:57:50.059592 kubelet[2245]: E0905 23:57:50.059558 2245 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 23:57:50.065775 kubelet[2245]: E0905 23:57:50.065731 2245 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://138.199.175.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.175.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 5 23:57:50.078125 kubelet[2245]: I0905 23:57:50.078090 2245 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 5 23:57:50.078125 kubelet[2245]: I0905 23:57:50.078118 2245 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 5 23:57:50.078274 kubelet[2245]: I0905 23:57:50.078138 2245 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 23:57:50.081101 kubelet[2245]: I0905 23:57:50.081022 2245 policy_none.go:49] "None policy: Start"
Sep 5 23:57:50.081101 kubelet[2245]: I0905 23:57:50.081070 2245 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 5 23:57:50.081101 kubelet[2245]: I0905 23:57:50.081084 2245 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 23:57:50.087348 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 5 23:57:50.107080 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 5 23:57:50.112753 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 5 23:57:50.126318 kubelet[2245]: E0905 23:57:50.125744 2245 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 5 23:57:50.126318 kubelet[2245]: I0905 23:57:50.126028 2245 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 23:57:50.126318 kubelet[2245]: I0905 23:57:50.126044 2245 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 23:57:50.126867 kubelet[2245]: I0905 23:57:50.126355 2245 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 23:57:50.129376 kubelet[2245]: E0905 23:57:50.129319 2245 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 5 23:57:50.129498 kubelet[2245]: E0905 23:57:50.129397 2245 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-4ef3874a70\" not found"
Sep 5 23:57:50.178132 systemd[1]: Created slice kubepods-burstable-pod7e210808890da72ee3e94554af8c04e0.slice - libcontainer container kubepods-burstable-pod7e210808890da72ee3e94554af8c04e0.slice.
Sep 5 23:57:50.188580 kubelet[2245]: E0905 23:57:50.187119 2245 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-4ef3874a70\" not found" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.191709 systemd[1]: Created slice kubepods-burstable-pode541c017e1943efa824c8b3337db0933.slice - libcontainer container kubepods-burstable-pode541c017e1943efa824c8b3337db0933.slice.
Sep 5 23:57:50.194962 kubelet[2245]: E0905 23:57:50.194930 2245 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-4ef3874a70\" not found" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.197936 systemd[1]: Created slice kubepods-burstable-pod54e3e6c9d7915cef527fc424824822a9.slice - libcontainer container kubepods-burstable-pod54e3e6c9d7915cef527fc424824822a9.slice.
Sep 5 23:57:50.200657 kubelet[2245]: E0905 23:57:50.200606 2245 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-4ef3874a70\" not found" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.233545 kubelet[2245]: I0905 23:57:50.231089 2245 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.234162 kubelet[2245]: E0905 23:57:50.234121 2245 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.175.7:6443/api/v1/nodes\": dial tcp 138.199.175.7:6443: connect: connection refused" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.246147 kubelet[2245]: I0905 23:57:50.245606 2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e541c017e1943efa824c8b3337db0933-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-4ef3874a70\" (UID: \"e541c017e1943efa824c8b3337db0933\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.246147 kubelet[2245]: I0905 23:57:50.245710 2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e541c017e1943efa824c8b3337db0933-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-4ef3874a70\" (UID: \"e541c017e1943efa824c8b3337db0933\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.246147 kubelet[2245]: I0905 23:57:50.245756 2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54e3e6c9d7915cef527fc424824822a9-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" (UID: \"54e3e6c9d7915cef527fc424824822a9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.246147 kubelet[2245]: I0905 23:57:50.245792 2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54e3e6c9d7915cef527fc424824822a9-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" (UID: \"54e3e6c9d7915cef527fc424824822a9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.246147 kubelet[2245]: I0905 23:57:50.245828 2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54e3e6c9d7915cef527fc424824822a9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" (UID: \"54e3e6c9d7915cef527fc424824822a9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.246668 kubelet[2245]: I0905 23:57:50.245894 2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e210808890da72ee3e94554af8c04e0-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-4ef3874a70\" (UID: \"7e210808890da72ee3e94554af8c04e0\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.246668 kubelet[2245]: I0905 23:57:50.245931 2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e541c017e1943efa824c8b3337db0933-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-4ef3874a70\" (UID: \"e541c017e1943efa824c8b3337db0933\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.246668 kubelet[2245]: I0905 23:57:50.245964 2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54e3e6c9d7915cef527fc424824822a9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" (UID: \"54e3e6c9d7915cef527fc424824822a9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.246668 kubelet[2245]: I0905 23:57:50.246000 2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54e3e6c9d7915cef527fc424824822a9-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" (UID: \"54e3e6c9d7915cef527fc424824822a9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.250241 kubelet[2245]: E0905 23:57:50.250141 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.175.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-4ef3874a70?timeout=10s\": dial tcp 138.199.175.7:6443: connect: connection refused" interval="400ms"
Sep 5 23:57:50.437491 kubelet[2245]: I0905 23:57:50.437392 2245 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.438020 kubelet[2245]: E0905 23:57:50.437990 2245 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.175.7:6443/api/v1/nodes\": dial tcp 138.199.175.7:6443: connect: connection refused" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.490541 containerd[1480]: time="2025-09-05T23:57:50.489968294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-4ef3874a70,Uid:7e210808890da72ee3e94554af8c04e0,Namespace:kube-system,Attempt:0,}"
Sep 5 23:57:50.496923 containerd[1480]: time="2025-09-05T23:57:50.496512875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-4ef3874a70,Uid:e541c017e1943efa824c8b3337db0933,Namespace:kube-system,Attempt:0,}"
Sep 5 23:57:50.502340 containerd[1480]: time="2025-09-05T23:57:50.502228953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-4ef3874a70,Uid:54e3e6c9d7915cef527fc424824822a9,Namespace:kube-system,Attempt:0,}"
Sep 5 23:57:50.652108 kubelet[2245]: E0905 23:57:50.651929 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.175.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-4ef3874a70?timeout=10s\": dial tcp 138.199.175.7:6443: connect: connection refused" interval="800ms"
Sep 5 23:57:50.844566 kubelet[2245]: I0905 23:57:50.844531 2245 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:50.845268 kubelet[2245]: E0905 23:57:50.845203 2245 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.175.7:6443/api/v1/nodes\": dial tcp 138.199.175.7:6443: connect: connection refused" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:51.014241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount587982452.mount: Deactivated successfully.
Sep 5 23:57:51.022414 containerd[1480]: time="2025-09-05T23:57:51.022357601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 23:57:51.024035 containerd[1480]: time="2025-09-05T23:57:51.023776306Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 23:57:51.025327 containerd[1480]: time="2025-09-05T23:57:51.025285005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Sep 5 23:57:51.026444 containerd[1480]: time="2025-09-05T23:57:51.026346494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 5 23:57:51.028100 containerd[1480]: time="2025-09-05T23:57:51.027206876Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 23:57:51.028744 containerd[1480]: time="2025-09-05T23:57:51.028712856Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 23:57:51.030665 containerd[1480]: time="2025-09-05T23:57:51.030627648Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 5 23:57:51.032450 containerd[1480]: time="2025-09-05T23:57:51.032317415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 23:57:51.034024 containerd[1480]: time="2025-09-05T23:57:51.033809515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.719989ms"
Sep 5 23:57:51.035202 containerd[1480]: time="2025-09-05T23:57:51.034970437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.381128ms"
Sep 5 23:57:51.037132 containerd[1480]: time="2025-09-05T23:57:51.037072297Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 534.692394ms"
Sep 5 23:57:51.181480 containerd[1480]: time="2025-09-05T23:57:51.181281895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:57:51.181804 containerd[1480]: time="2025-09-05T23:57:51.181628592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:57:51.181804 containerd[1480]: time="2025-09-05T23:57:51.181659550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:57:51.181897 containerd[1480]: time="2025-09-05T23:57:51.181764383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:57:51.192047 containerd[1480]: time="2025-09-05T23:57:51.191699719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:57:51.192047 containerd[1480]: time="2025-09-05T23:57:51.191771474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:57:51.192047 containerd[1480]: time="2025-09-05T23:57:51.191787153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:57:51.192047 containerd[1480]: time="2025-09-05T23:57:51.191866387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:57:51.193131 containerd[1480]: time="2025-09-05T23:57:51.192931756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:57:51.193131 containerd[1480]: time="2025-09-05T23:57:51.192986993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:57:51.193131 containerd[1480]: time="2025-09-05T23:57:51.192998032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:57:51.194332 containerd[1480]: time="2025-09-05T23:57:51.193092305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:57:51.202343 kubelet[2245]: E0905 23:57:51.202192 2245 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://138.199.175.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.175.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 5 23:57:51.217030 systemd[1]: Started cri-containerd-26bbc456be6b8e0444d187b9d3fb1a6a5149db1e59eeff2de439060ffd973338.scope - libcontainer container 26bbc456be6b8e0444d187b9d3fb1a6a5149db1e59eeff2de439060ffd973338.
Sep 5 23:57:51.224370 systemd[1]: Started cri-containerd-abf49184476e54f5ee3c04384e46cf7e03d3aa0780b65b566737343cc783c62c.scope - libcontainer container abf49184476e54f5ee3c04384e46cf7e03d3aa0780b65b566737343cc783c62c.
Sep 5 23:57:51.237619 systemd[1]: Started cri-containerd-f0c2f5c8ea470c17cfc0bb4659e8e3c2259cde69f3d7594b0dfb32296cb8093f.scope - libcontainer container f0c2f5c8ea470c17cfc0bb4659e8e3c2259cde69f3d7594b0dfb32296cb8093f.
Sep 5 23:57:51.288890 containerd[1480]: time="2025-09-05T23:57:51.288628958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-4ef3874a70,Uid:54e3e6c9d7915cef527fc424824822a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0c2f5c8ea470c17cfc0bb4659e8e3c2259cde69f3d7594b0dfb32296cb8093f\""
Sep 5 23:57:51.296796 containerd[1480]: time="2025-09-05T23:57:51.296729616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-4ef3874a70,Uid:7e210808890da72ee3e94554af8c04e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"abf49184476e54f5ee3c04384e46cf7e03d3aa0780b65b566737343cc783c62c\""
Sep 5 23:57:51.300323 containerd[1480]: time="2025-09-05T23:57:51.300077673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-4ef3874a70,Uid:e541c017e1943efa824c8b3337db0933,Namespace:kube-system,Attempt:0,} returns sandbox id \"26bbc456be6b8e0444d187b9d3fb1a6a5149db1e59eeff2de439060ffd973338\""
Sep 5 23:57:51.304821 containerd[1480]: time="2025-09-05T23:57:51.304773959Z" level=info msg="CreateContainer within sandbox \"abf49184476e54f5ee3c04384e46cf7e03d3aa0780b65b566737343cc783c62c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 5 23:57:51.306471 containerd[1480]: time="2025-09-05T23:57:51.306434168Z" level=info msg="CreateContainer within sandbox \"f0c2f5c8ea470c17cfc0bb4659e8e3c2259cde69f3d7594b0dfb32296cb8093f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 5 23:57:51.308169 containerd[1480]: time="2025-09-05T23:57:51.308001703Z" level=info msg="CreateContainer within sandbox \"26bbc456be6b8e0444d187b9d3fb1a6a5149db1e59eeff2de439060ffd973338\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 5 23:57:51.324107 containerd[1480]: time="2025-09-05T23:57:51.323878521Z" level=info msg="CreateContainer within sandbox \"abf49184476e54f5ee3c04384e46cf7e03d3aa0780b65b566737343cc783c62c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3\""
Sep 5 23:57:51.326465 containerd[1480]: time="2025-09-05T23:57:51.325116598Z" level=info msg="StartContainer for \"c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3\""
Sep 5 23:57:51.331870 containerd[1480]: time="2025-09-05T23:57:51.331731156Z" level=info msg="CreateContainer within sandbox \"f0c2f5c8ea470c17cfc0bb4659e8e3c2259cde69f3d7594b0dfb32296cb8093f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af\""
Sep 5 23:57:51.333448 containerd[1480]: time="2025-09-05T23:57:51.332834842Z" level=info msg="StartContainer for \"a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af\""
Sep 5 23:57:51.336393 containerd[1480]: time="2025-09-05T23:57:51.336355327Z" level=info msg="CreateContainer within sandbox \"26bbc456be6b8e0444d187b9d3fb1a6a5149db1e59eeff2de439060ffd973338\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c13505b977205db203ee3df2bee99894c6088dc8eeda474ab20f2cc8606f1a33\""
Sep 5 23:57:51.337650 containerd[1480]: time="2025-09-05T23:57:51.337620203Z" level=info msg="StartContainer for \"c13505b977205db203ee3df2bee99894c6088dc8eeda474ab20f2cc8606f1a33\""
Sep 5 23:57:51.357639 systemd[1]: Started cri-containerd-c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3.scope - libcontainer container c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3.
Sep 5 23:57:51.387678 systemd[1]: Started cri-containerd-a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af.scope - libcontainer container a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af.
Sep 5 23:57:51.396186 systemd[1]: Started cri-containerd-c13505b977205db203ee3df2bee99894c6088dc8eeda474ab20f2cc8606f1a33.scope - libcontainer container c13505b977205db203ee3df2bee99894c6088dc8eeda474ab20f2cc8606f1a33.
Sep 5 23:57:51.420870 containerd[1480]: time="2025-09-05T23:57:51.420637172Z" level=info msg="StartContainer for \"c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3\" returns successfully"
Sep 5 23:57:51.447902 kubelet[2245]: E0905 23:57:51.447660 2245 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://138.199.175.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-4ef3874a70&limit=500&resourceVersion=0\": dial tcp 138.199.175.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 5 23:57:51.449656 kubelet[2245]: E0905 23:57:51.449044 2245 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://138.199.175.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.175.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 5 23:57:51.454197 kubelet[2245]: E0905 23:57:51.454010 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.175.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-4ef3874a70?timeout=10s\": dial tcp 138.199.175.7:6443: connect: connection refused" interval="1.6s"
Sep 5 23:57:51.461111 containerd[1480]: time="2025-09-05T23:57:51.460739771Z" level=info msg="StartContainer for \"a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af\" returns successfully"
Sep 5 23:57:51.462068 kubelet[2245]: E0905 23:57:51.462005 2245 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://138.199.175.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.175.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 5 23:57:51.470034 containerd[1480]: time="2025-09-05T23:57:51.469927317Z" level=info msg="StartContainer for \"c13505b977205db203ee3df2bee99894c6088dc8eeda474ab20f2cc8606f1a33\" returns successfully"
Sep 5 23:57:51.648907 kubelet[2245]: I0905 23:57:51.648872 2245 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:52.089951 kubelet[2245]: E0905 23:57:52.089920 2245 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-4ef3874a70\" not found" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:52.093879 kubelet[2245]: E0905 23:57:52.093702 2245 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-4ef3874a70\" not found" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:52.096440 kubelet[2245]: E0905 23:57:52.095303 2245 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-4ef3874a70\" not found" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:53.099482 kubelet[2245]: E0905 23:57:53.097347 2245 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-4ef3874a70\" not found" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:53.099482 kubelet[2245]: E0905 23:57:53.097794 2245 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-4ef3874a70\" not found" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:54.101395 kubelet[2245]: E0905 23:57:54.101064 2245 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-4ef3874a70\" not found" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:54.707914 kubelet[2245]: E0905 23:57:54.707869 2245 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-4ef3874a70\" not found" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:54.799861 kubelet[2245]: I0905 23:57:54.799819 2245 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:54.799861 kubelet[2245]: E0905 23:57:54.799867 2245 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-4ef3874a70\": node \"ci-4081-3-5-n-4ef3874a70\" not found"
Sep 5 23:57:54.842447 kubelet[2245]: I0905 23:57:54.840394 2245 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:54.853174 kubelet[2245]: E0905 23:57:54.853136 2245 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-4ef3874a70\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:54.853174 kubelet[2245]: I0905 23:57:54.853173 2245 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:54.856503 kubelet[2245]: E0905 23:57:54.856456 2245 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-4ef3874a70\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:54.856503 kubelet[2245]: I0905 23:57:54.856492 2245 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:54.858832 kubelet[2245]: E0905 23:57:54.858759 2245 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:55.024014 kubelet[2245]: I0905 23:57:55.023252 2245 apiserver.go:52] "Watching apiserver"
Sep 5 23:57:55.044750 kubelet[2245]: I0905 23:57:55.044527 2245 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 5 23:57:56.188655 kubelet[2245]: I0905 23:57:56.188390 2245 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70"
Sep 5 23:57:57.182260 systemd[1]: Reloading requested from client PID 2529 ('systemctl') (unit session-7.scope)...
Sep 5 23:57:57.182280 systemd[1]: Reloading...
Sep 5 23:57:57.324449 zram_generator::config[2581]: No configuration found.
Sep 5 23:57:57.440230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:57:57.530310 systemd[1]: Reloading finished in 347 ms.
Sep 5 23:57:57.573493 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:57:57.593310 systemd[1]: kubelet.service: Deactivated successfully.
Sep 5 23:57:57.593906 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:57:57.594015 systemd[1]: kubelet.service: Consumed 1.288s CPU time, 129.3M memory peak, 0B memory swap peak.
Sep 5 23:57:57.604970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:57:57.753675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:57:57.754782 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 23:57:57.810843 kubelet[2614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 23:57:57.810843 kubelet[2614]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 5 23:57:57.810843 kubelet[2614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 23:57:57.811241 kubelet[2614]: I0905 23:57:57.810889 2614 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 23:57:57.826473 kubelet[2614]: I0905 23:57:57.825800 2614 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 5 23:57:57.826473 kubelet[2614]: I0905 23:57:57.825836 2614 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 23:57:57.826473 kubelet[2614]: I0905 23:57:57.826067 2614 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 5 23:57:57.827514 kubelet[2614]: I0905 23:57:57.827462 2614 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 5 23:57:57.830052 kubelet[2614]: I0905 23:57:57.830004 2614 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 23:57:57.836472 kubelet[2614]: E0905 23:57:57.836194 2614 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 5 23:57:57.836472 kubelet[2614]: I0905 23:57:57.836236 2614 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 5 23:57:57.840399 kubelet[2614]: I0905 23:57:57.840370 2614 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 23:57:57.840878 kubelet[2614]: I0905 23:57:57.840849 2614 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 23:57:57.841225 kubelet[2614]: I0905 23:57:57.840962 2614 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-4ef3874a70","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 23:57:57.844061 kubelet[2614]: I0905 23:57:57.843712 2614 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 23:57:57.844061 kubelet[2614]: I0905 23:57:57.843745 2614 container_manager_linux.go:303] "Creating device plugin manager"
Sep 5 23:57:57.844061 kubelet[2614]: I0905 23:57:57.843809 2614 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 23:57:57.844061 kubelet[2614]: I0905 23:57:57.844005 2614 kubelet.go:480] "Attempting to sync node with API server"
Sep 5 23:57:57.844061 kubelet[2614]: I0905 23:57:57.844019 2614 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 23:57:57.844061 kubelet[2614]: I0905 23:57:57.844046 2614 kubelet.go:386] "Adding apiserver pod source"
Sep 5 23:57:57.844335 kubelet[2614]: I0905 23:57:57.844319 2614 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 23:57:57.850440 kubelet[2614]: I0905 23:57:57.849324 2614 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 5 23:57:57.853437 kubelet[2614]: I0905 23:57:57.851328 2614 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 5 23:57:57.862450 kubelet[2614]: I0905 23:57:57.861772 2614 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 5 23:57:57.862450 kubelet[2614]: I0905 23:57:57.861823 2614 server.go:1289] "Started kubelet"
Sep 5 23:57:57.865266 kubelet[2614]: I0905 23:57:57.865235 2614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 23:57:57.881505 kubelet[2614]: I0905 23:57:57.880252 2614 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 23:57:57.881505 kubelet[2614]: I0905 23:57:57.881166 2614 server.go:317] "Adding debug handlers to kubelet server"
Sep 5 23:57:57.887472 kubelet[2614]: I0905 23:57:57.886445 2614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 23:57:57.887472 kubelet[2614]: I0905 23:57:57.886694 2614 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 23:57:57.887472 kubelet[2614]: I0905 23:57:57.886907 2614 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 23:57:57.891447 kubelet[2614]: I0905 23:57:57.890504 2614 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 5 23:57:57.896383 kubelet[2614]: I0905 23:57:57.896333 2614 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 5 23:57:57.899046 kubelet[2614]: I0905 23:57:57.897484 2614 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 23:57:57.901271 kubelet[2614]: I0905 23:57:57.901218 2614 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 5 23:57:57.903443 kubelet[2614]: I0905 23:57:57.903393 2614 factory.go:223] Registration of the systemd container factory successfully
Sep 5 23:57:57.903670 kubelet[2614]: I0905 23:57:57.903650 2614 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 23:57:57.904016 kubelet[2614]: I0905 23:57:57.903974 2614 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 5 23:57:57.904016 kubelet[2614]: I0905 23:57:57.904004 2614 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 5 23:57:57.904094 kubelet[2614]: I0905 23:57:57.904025 2614 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 5 23:57:57.904094 kubelet[2614]: I0905 23:57:57.904032 2614 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 5 23:57:57.904094 kubelet[2614]: E0905 23:57:57.904074 2614 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 23:57:57.910529 kubelet[2614]: E0905 23:57:57.910493 2614 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 23:57:57.911022 kubelet[2614]: I0905 23:57:57.910995 2614 factory.go:223] Registration of the containerd container factory successfully
Sep 5 23:57:57.988439 kubelet[2614]: I0905 23:57:57.988392 2614 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 5 23:57:57.989217 kubelet[2614]: I0905 23:57:57.989193 2614 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 5 23:57:57.989381 kubelet[2614]: I0905 23:57:57.989370 2614 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 23:57:57.989693 kubelet[2614]: I0905 23:57:57.989676 2614 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 5 23:57:57.989785 kubelet[2614]: I0905 23:57:57.989759 2614 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 5 23:57:57.990036 kubelet[2614]: I0905 23:57:57.989837 2614 policy_none.go:49] "None policy: Start"
Sep 5 23:57:57.990036 kubelet[2614]: I0905 23:57:57.989853 2614 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 5 23:57:57.990036 kubelet[2614]: I0905 23:57:57.989864 2614 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 23:57:57.990036 kubelet[2614]: I0905 23:57:57.989960 2614 state_mem.go:75] "Updated machine memory state"
Sep 5 23:57:57.995304 kubelet[2614]: E0905 23:57:57.995271 2614 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 5 23:57:57.996464 kubelet[2614]: I0905 23:57:57.995642 2614 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 23:57:57.996464 kubelet[2614]: I0905 23:57:57.995663 2614 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 23:57:57.996464 kubelet[2614]: I0905 23:57:57.996035 2614 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 23:57:57.998599 kubelet[2614]: E0905 23:57:57.998570 2614 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Sep 5 23:57:58.005916 kubelet[2614]: I0905 23:57:58.005804 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.008873 kubelet[2614]: I0905 23:57:58.006482 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.008873 kubelet[2614]: I0905 23:57:58.006598 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.023900 kubelet[2614]: E0905 23:57:58.023831 2614 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-4ef3874a70\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.098838 kubelet[2614]: I0905 23:57:58.098674 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e541c017e1943efa824c8b3337db0933-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-4ef3874a70\" (UID: \"e541c017e1943efa824c8b3337db0933\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.098838 kubelet[2614]: I0905 23:57:58.098742 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e541c017e1943efa824c8b3337db0933-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-4ef3874a70\" (UID: \"e541c017e1943efa824c8b3337db0933\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.098838 kubelet[2614]: I0905 23:57:58.098781 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54e3e6c9d7915cef527fc424824822a9-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" (UID: \"54e3e6c9d7915cef527fc424824822a9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.098838 kubelet[2614]: I0905 23:57:58.098812 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54e3e6c9d7915cef527fc424824822a9-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" (UID: \"54e3e6c9d7915cef527fc424824822a9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.098838 kubelet[2614]: I0905 23:57:58.098845 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e210808890da72ee3e94554af8c04e0-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-4ef3874a70\" (UID: \"7e210808890da72ee3e94554af8c04e0\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.099171 kubelet[2614]: I0905 23:57:58.098872 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e541c017e1943efa824c8b3337db0933-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-4ef3874a70\" (UID: \"e541c017e1943efa824c8b3337db0933\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.099171 kubelet[2614]: I0905 23:57:58.098899 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/54e3e6c9d7915cef527fc424824822a9-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" (UID: \"54e3e6c9d7915cef527fc424824822a9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.099171 kubelet[2614]: I0905 23:57:58.098927 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54e3e6c9d7915cef527fc424824822a9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" (UID: \"54e3e6c9d7915cef527fc424824822a9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.099171 kubelet[2614]: I0905 23:57:58.098956 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54e3e6c9d7915cef527fc424824822a9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-4ef3874a70\" (UID: \"54e3e6c9d7915cef527fc424824822a9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.109570 kubelet[2614]: I0905 23:57:58.109375 2614 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.122876 kubelet[2614]: I0905 23:57:58.122492 2614 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.122876 kubelet[2614]: I0905 23:57:58.122595 2614 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.845963 kubelet[2614]: I0905 23:57:58.845669 2614 apiserver.go:52] "Watching apiserver" Sep 5 23:57:58.897563 kubelet[2614]: I0905 23:57:58.897501 2614 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:57:58.930848 kubelet[2614]: I0905 23:57:58.930551 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-4ef3874a70" podStartSLOduration=0.930533092 podStartE2EDuration="930.533092ms" podCreationTimestamp="2025-09-05 23:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:57:58.911235824 +0000 UTC m=+1.150388173" watchObservedRunningTime="2025-09-05 23:57:58.930533092 +0000 UTC m=+1.169685481" Sep 5 23:57:58.941977 kubelet[2614]: I0905 23:57:58.941796 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.950534 kubelet[2614]: E0905 23:57:58.950456 2614 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-4ef3874a70\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70" Sep 5 23:57:58.953692 kubelet[2614]: I0905 23:57:58.953481 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-4ef3874a70" podStartSLOduration=2.953464105 podStartE2EDuration="2.953464105s" podCreationTimestamp="2025-09-05 23:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:57:58.930936993 +0000 UTC m=+1.170089382" watchObservedRunningTime="2025-09-05 23:57:58.953464105 +0000 UTC m=+1.192616494" Sep 5 23:57:58.967717 kubelet[2614]: I0905 23:57:58.967567 2614 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70" podStartSLOduration=0.967547985 podStartE2EDuration="967.547985ms" podCreationTimestamp="2025-09-05 23:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:57:58.954221468 +0000 UTC m=+1.193373857" watchObservedRunningTime="2025-09-05 23:57:58.967547985 +0000 UTC m=+1.206700374" Sep 5 23:58:02.217992 kubelet[2614]: I0905 23:58:02.217802 2614 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:58:02.219952 kubelet[2614]: I0905 23:58:02.218846 2614 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:58:02.219988 containerd[1480]: time="2025-09-05T23:58:02.218400475Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 23:58:02.579023 systemd[1]: Created slice kubepods-besteffort-pod68d7cb8a_ba64_416d_8fda_d0c3bf9d2822.slice - libcontainer container kubepods-besteffort-pod68d7cb8a_ba64_416d_8fda_d0c3bf9d2822.slice. Sep 5 23:58:02.623675 kubelet[2614]: I0905 23:58:02.623614 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68d7cb8a-ba64-416d-8fda-d0c3bf9d2822-kube-proxy\") pod \"kube-proxy-htgr2\" (UID: \"68d7cb8a-ba64-416d-8fda-d0c3bf9d2822\") " pod="kube-system/kube-proxy-htgr2" Sep 5 23:58:02.623675 kubelet[2614]: I0905 23:58:02.623695 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68d7cb8a-ba64-416d-8fda-d0c3bf9d2822-lib-modules\") pod \"kube-proxy-htgr2\" (UID: \"68d7cb8a-ba64-416d-8fda-d0c3bf9d2822\") " pod="kube-system/kube-proxy-htgr2" Sep 5 23:58:02.624030 kubelet[2614]: I0905 23:58:02.623745 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68d7cb8a-ba64-416d-8fda-d0c3bf9d2822-xtables-lock\") pod \"kube-proxy-htgr2\" (UID: \"68d7cb8a-ba64-416d-8fda-d0c3bf9d2822\") " pod="kube-system/kube-proxy-htgr2" Sep 5 23:58:02.624030 kubelet[2614]: I0905 23:58:02.623783 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z6jq\" (UniqueName: \"kubernetes.io/projected/68d7cb8a-ba64-416d-8fda-d0c3bf9d2822-kube-api-access-2z6jq\") pod \"kube-proxy-htgr2\" (UID: \"68d7cb8a-ba64-416d-8fda-d0c3bf9d2822\") " pod="kube-system/kube-proxy-htgr2" Sep 5 23:58:02.736528 kubelet[2614]: E0905 23:58:02.736442 2614 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 5 23:58:02.736528 kubelet[2614]: E0905 23:58:02.736479 2614 projected.go:194] Error preparing data for projected volume kube-api-access-2z6jq for pod kube-system/kube-proxy-htgr2: configmap "kube-root-ca.crt" not found Sep 5 23:58:02.736768 kubelet[2614]: E0905 23:58:02.736545 2614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68d7cb8a-ba64-416d-8fda-d0c3bf9d2822-kube-api-access-2z6jq podName:68d7cb8a-ba64-416d-8fda-d0c3bf9d2822 nodeName:}" failed. No retries permitted until 2025-09-05 23:58:03.236522332 +0000 UTC m=+5.475674721 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2z6jq" (UniqueName: "kubernetes.io/projected/68d7cb8a-ba64-416d-8fda-d0c3bf9d2822-kube-api-access-2z6jq") pod "kube-proxy-htgr2" (UID: "68d7cb8a-ba64-416d-8fda-d0c3bf9d2822") : configmap "kube-root-ca.crt" not found Sep 5 23:58:03.404776 systemd[1]: Created slice kubepods-besteffort-pod4fcac0f3_c46b_4f92_8ccf_7c7226bbb022.slice - libcontainer container kubepods-besteffort-pod4fcac0f3_c46b_4f92_8ccf_7c7226bbb022.slice. Sep 5 23:58:03.487283 containerd[1480]: time="2025-09-05T23:58:03.487210921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htgr2,Uid:68d7cb8a-ba64-416d-8fda-d0c3bf9d2822,Namespace:kube-system,Attempt:0,}" Sep 5 23:58:03.529499 kubelet[2614]: I0905 23:58:03.529305 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttrrh\" (UniqueName: \"kubernetes.io/projected/4fcac0f3-c46b-4f92-8ccf-7c7226bbb022-kube-api-access-ttrrh\") pod \"tigera-operator-755d956888-495rq\" (UID: \"4fcac0f3-c46b-4f92-8ccf-7c7226bbb022\") " pod="tigera-operator/tigera-operator-755d956888-495rq" Sep 5 23:58:03.529499 kubelet[2614]: I0905 23:58:03.529395 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4fcac0f3-c46b-4f92-8ccf-7c7226bbb022-var-lib-calico\") pod \"tigera-operator-755d956888-495rq\" (UID: \"4fcac0f3-c46b-4f92-8ccf-7c7226bbb022\") " pod="tigera-operator/tigera-operator-755d956888-495rq" Sep 5 23:58:03.531001 containerd[1480]: time="2025-09-05T23:58:03.530370144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:03.531478 containerd[1480]: time="2025-09-05T23:58:03.531277269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:03.531478 containerd[1480]: time="2025-09-05T23:58:03.531301668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:03.531478 containerd[1480]: time="2025-09-05T23:58:03.531417023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:03.550675 systemd[1]: Started cri-containerd-828374c03c69f5343c01486f746eb5e2d106ac5e10df3fcf61814e24a27f1f1d.scope - libcontainer container 828374c03c69f5343c01486f746eb5e2d106ac5e10df3fcf61814e24a27f1f1d. 
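The MountVolume.SetUp failure above is ordering noise rather than a real fault: the pod's projected volume needs the kube-root-ca.crt ConfigMap, which the controller-manager's root-CA publisher has not created in the namespace yet, so the kubelet schedules a retry with durationBeforeRetry=500ms and roughly doubles the delay on each further failure. A minimal Go sketch of that retry shape, using the k8s.io/apimachinery wait helpers (the readiness check is a hypothetical stand-in, not the kubelet's actual nestedpendingoperations code):

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first retry delay, as in the log
		Factor:   2.0,                    // double the delay after every failure
		Steps:    5,                      // stop retrying after a few attempts
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempts++
		// Hypothetical stand-in for "does kube-root-ca.crt exist yet?".
		// Pretend it appears on the third look.
		return attempts >= 3, nil
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("volume setup still failing, giving up")
		return
	}
	fmt.Printf("volume mounted after %d attempts\n", attempts)
}

Once the ConfigMap is published, the next retry succeeds, which is consistent with the kube-proxy sandbox starting cleanly about a second later in the log.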
Sep 5 23:58:03.576254 containerd[1480]: time="2025-09-05T23:58:03.575620165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htgr2,Uid:68d7cb8a-ba64-416d-8fda-d0c3bf9d2822,Namespace:kube-system,Attempt:0,} returns sandbox id \"828374c03c69f5343c01486f746eb5e2d106ac5e10df3fcf61814e24a27f1f1d\"" Sep 5 23:58:03.583470 containerd[1480]: time="2025-09-05T23:58:03.583414699Z" level=info msg="CreateContainer within sandbox \"828374c03c69f5343c01486f746eb5e2d106ac5e10df3fcf61814e24a27f1f1d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:58:03.595864 containerd[1480]: time="2025-09-05T23:58:03.595715015Z" level=info msg="CreateContainer within sandbox \"828374c03c69f5343c01486f746eb5e2d106ac5e10df3fcf61814e24a27f1f1d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0cf07b09f98f2dcb126be823c17629544237deb5f5cf345da1499a0da24857bb\"" Sep 5 23:58:03.596971 containerd[1480]: time="2025-09-05T23:58:03.596925928Z" level=info msg="StartContainer for \"0cf07b09f98f2dcb126be823c17629544237deb5f5cf345da1499a0da24857bb\"" Sep 5 23:58:03.627666 systemd[1]: Started cri-containerd-0cf07b09f98f2dcb126be823c17629544237deb5f5cf345da1499a0da24857bb.scope - libcontainer container 0cf07b09f98f2dcb126be823c17629544237deb5f5cf345da1499a0da24857bb. Sep 5 23:58:03.663016 containerd[1480]: time="2025-09-05T23:58:03.662908254Z" level=info msg="StartContainer for \"0cf07b09f98f2dcb126be823c17629544237deb5f5cf345da1499a0da24857bb\" returns successfully" Sep 5 23:58:03.709047 containerd[1480]: time="2025-09-05T23:58:03.709002441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-495rq,Uid:4fcac0f3-c46b-4f92-8ccf-7c7226bbb022,Namespace:tigera-operator,Attempt:0,}" Sep 5 23:58:03.733524 containerd[1480]: time="2025-09-05T23:58:03.732569395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:03.733524 containerd[1480]: time="2025-09-05T23:58:03.732635112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:03.733524 containerd[1480]: time="2025-09-05T23:58:03.732837584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:03.733524 containerd[1480]: time="2025-09-05T23:58:03.732948060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:03.760756 systemd[1]: Started cri-containerd-066e40582f2800203f6c9de101445a8e8d37334bda940458b4a9ae164b1c90aa.scope - libcontainer container 066e40582f2800203f6c9de101445a8e8d37334bda940458b4a9ae164b1c90aa. 
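The entries above trace the CRI call sequence the kubelet drives for every pod: RunPodSandbox returns a sandbox ID (828374c0…), then CreateContainer and StartContainer run kube-proxy inside it. The same gRPC endpoint can be inspected directly; a small read-only sketch against the CRI runtime service follows (the socket path assumes a stock containerd install and is not taken from this host's config):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same transport the kubelet uses: plaintext gRPC over a unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		// e.g. kube-system/kube-proxy-htgr2 -> 828374c03c69...
		fmt.Printf("%s/%s -> %s\n", sb.Metadata.Namespace, sb.Metadata.Name, sb.Id)
	}
}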
Sep 5 23:58:03.800577 containerd[1480]: time="2025-09-05T23:58:03.800535123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-495rq,Uid:4fcac0f3-c46b-4f92-8ccf-7c7226bbb022,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"066e40582f2800203f6c9de101445a8e8d37334bda940458b4a9ae164b1c90aa\"" Sep 5 23:58:03.803456 containerd[1480]: time="2025-09-05T23:58:03.803389930Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 23:58:05.324461 kubelet[2614]: I0905 23:58:05.324248 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-htgr2" podStartSLOduration=3.324225811 podStartE2EDuration="3.324225811s" podCreationTimestamp="2025-09-05 23:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:58:03.971367966 +0000 UTC m=+6.210520355" watchObservedRunningTime="2025-09-05 23:58:05.324225811 +0000 UTC m=+7.563378200" Sep 5 23:58:05.676855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1127091594.mount: Deactivated successfully. Sep 5 23:58:06.084995 containerd[1480]: time="2025-09-05T23:58:06.084148168Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:06.085483 containerd[1480]: time="2025-09-05T23:58:06.085453882Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 5 23:58:06.086072 containerd[1480]: time="2025-09-05T23:58:06.086046582Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:06.090973 containerd[1480]: time="2025-09-05T23:58:06.090896291Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:06.091843 containerd[1480]: time="2025-09-05T23:58:06.091807059Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.288354891s" Sep 5 23:58:06.091953 containerd[1480]: time="2025-09-05T23:58:06.091928455Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 5 23:58:06.097157 containerd[1480]: time="2025-09-05T23:58:06.097094953Z" level=info msg="CreateContainer within sandbox \"066e40582f2800203f6c9de101445a8e8d37334bda940458b4a9ae164b1c90aa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 23:58:06.110927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3315044409.mount: Deactivated successfully. 
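The tigera-operator pull above resolves the mutable tag quay.io/tigera/operator:v1.38.6 to an immutable repo digest (sha256:00a7a9b6…) and records both, along with the bytes read and the 2.29s wall time. A hedged sketch of the same pull through the containerd Go client; the socket path and the k8s.io namespace are the usual defaults for a CRI-managed node, not values read from this host:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.6",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// The digest matches what the log records as the repo digest; it pins
	// the exact content regardless of where the tag moves later.
	fmt.Println(img.Name(), img.Target().Digest)
}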
Sep 5 23:58:06.120318 containerd[1480]: time="2025-09-05T23:58:06.120152982Z" level=info msg="CreateContainer within sandbox \"066e40582f2800203f6c9de101445a8e8d37334bda940458b4a9ae164b1c90aa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302\"" Sep 5 23:58:06.121368 containerd[1480]: time="2025-09-05T23:58:06.121310781Z" level=info msg="StartContainer for \"9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302\"" Sep 5 23:58:06.153684 systemd[1]: Started cri-containerd-9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302.scope - libcontainer container 9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302. Sep 5 23:58:06.181010 containerd[1480]: time="2025-09-05T23:58:06.180959564Z" level=info msg="StartContainer for \"9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302\" returns successfully" Sep 5 23:58:06.979440 kubelet[2614]: I0905 23:58:06.979016 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-495rq" podStartSLOduration=1.68862264 podStartE2EDuration="3.978998217s" podCreationTimestamp="2025-09-05 23:58:03 +0000 UTC" firstStartedPulling="2025-09-05 23:58:03.802735716 +0000 UTC m=+6.041888105" lastFinishedPulling="2025-09-05 23:58:06.093111293 +0000 UTC m=+8.332263682" observedRunningTime="2025-09-05 23:58:06.978933419 +0000 UTC m=+9.218085808" watchObservedRunningTime="2025-09-05 23:58:06.978998217 +0000 UTC m=+9.218150606" Sep 5 23:58:12.250841 sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 5 23:58:12.415733 sshd[1748]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:12.421767 systemd[1]: sshd@6-138.199.175.7:22-139.178.68.195:57412.service: Deactivated successfully. Sep 5 23:58:12.426537 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 23:58:12.428817 systemd[1]: session-7.scope: Consumed 7.682s CPU time, 149.6M memory peak, 0B memory swap peak. Sep 5 23:58:12.433734 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Sep 5 23:58:12.435054 systemd-logind[1457]: Removed session 7. Sep 5 23:58:18.723786 systemd[1]: Created slice kubepods-besteffort-poda5a71fbe_1944_4073_b937_1df44f5d3d05.slice - libcontainer container kubepods-besteffort-poda5a71fbe_1944_4073_b937_1df44f5d3d05.slice. 
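The "Created slice" entry above is mechanical naming by the kubelet's systemd cgroup driver ("CgroupDriver":"systemd" in the NodeConfig logged earlier): a BestEffort pod is nested under kubepods-besteffort, and the dashes in the pod UID become underscores because "-" is the nesting separator in systemd slice names. A small reconstruction of the visible rule (illustrative, not the kubelet's actual helper):

package main

import (
	"fmt"
	"strings"
)

// sliceForBestEffortPod rebuilds the unit name seen in the log for a
// BestEffort pod: dashes in the UID are replaced with underscores since
// systemd uses "-" to express slice nesting.
func sliceForBestEffortPod(uid string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	fmt.Println(sliceForBestEffortPod("a5a71fbe-1944-4073-b937-1df44f5d3d05"))
	// Output: kubepods-besteffort-poda5a71fbe_1944_4073_b937_1df44f5d3d05.slice
}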
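Interleaved with the Calico volume setup below, the kubelet repeatedly re-probes its FlexVolume plugin directory and fails on the nodeagent~uds entry: the uds executable is missing from /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, so each init call produces empty output and the JSON decode fails with "unexpected end of JSON input". The noise is harmless, but for context, a FlexVolume driver is just an executable that answers verbs on argv with a JSON status on stdout. A minimal stub for the init handshake (a sketch of the call convention, not Calico's real uds binary):

package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet parses from a
// FlexVolume driver's stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	op := ""
	if len(os.Args) > 1 {
		op = os.Args[1]
	}
	enc := json.NewEncoder(os.Stdout)
	if op == "init" {
		// "attach": false tells the kubelet this driver has no
		// attach/detach phase, only mount/unmount.
		enc.Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	enc.Encode(driverStatus{Status: "Not supported", Message: op})
}

Had the probe found even this stub, the unmarshal error would disappear; the repeated error triplets that follow are the same probe firing once per plugin rescan.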
Sep 5 23:58:18.734498 kubelet[2614]: I0905 23:58:18.734054 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5a71fbe-1944-4073-b937-1df44f5d3d05-tigera-ca-bundle\") pod \"calico-typha-9d85d494f-wmr95\" (UID: \"a5a71fbe-1944-4073-b937-1df44f5d3d05\") " pod="calico-system/calico-typha-9d85d494f-wmr95" Sep 5 23:58:18.734498 kubelet[2614]: I0905 23:58:18.734101 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a5a71fbe-1944-4073-b937-1df44f5d3d05-typha-certs\") pod \"calico-typha-9d85d494f-wmr95\" (UID: \"a5a71fbe-1944-4073-b937-1df44f5d3d05\") " pod="calico-system/calico-typha-9d85d494f-wmr95" Sep 5 23:58:18.734498 kubelet[2614]: I0905 23:58:18.734320 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b59p4\" (UniqueName: \"kubernetes.io/projected/a5a71fbe-1944-4073-b937-1df44f5d3d05-kube-api-access-b59p4\") pod \"calico-typha-9d85d494f-wmr95\" (UID: \"a5a71fbe-1944-4073-b937-1df44f5d3d05\") " pod="calico-system/calico-typha-9d85d494f-wmr95" Sep 5 23:58:18.896863 systemd[1]: Created slice kubepods-besteffort-pod16b9764e_dfbc_4d43_95b6_b8dca9d67dd0.slice - libcontainer container kubepods-besteffort-pod16b9764e_dfbc_4d43_95b6_b8dca9d67dd0.slice. Sep 5 23:58:18.935825 kubelet[2614]: I0905 23:58:18.935777 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-policysync\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.935825 kubelet[2614]: I0905 23:58:18.935831 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-xtables-lock\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.935825 kubelet[2614]: I0905 23:58:18.935848 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-var-lib-calico\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.936092 kubelet[2614]: I0905 23:58:18.935914 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-cni-bin-dir\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.936092 kubelet[2614]: I0905 23:58:18.935930 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-cni-log-dir\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.936092 kubelet[2614]: I0905 23:58:18.935944 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-cni-net-dir\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.936092 kubelet[2614]: I0905 23:58:18.935966 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-flexvol-driver-host\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.936092 kubelet[2614]: I0905 23:58:18.935982 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx2lb\" (UniqueName: \"kubernetes.io/projected/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-kube-api-access-xx2lb\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.936200 kubelet[2614]: I0905 23:58:18.935998 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-var-run-calico\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.936200 kubelet[2614]: I0905 23:58:18.936013 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-lib-modules\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.936200 kubelet[2614]: I0905 23:58:18.936029 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-node-certs\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:18.936200 kubelet[2614]: I0905 23:58:18.936044 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16b9764e-dfbc-4d43-95b6-b8dca9d67dd0-tigera-ca-bundle\") pod \"calico-node-p7bdn\" (UID: \"16b9764e-dfbc-4d43-95b6-b8dca9d67dd0\") " pod="calico-system/calico-node-p7bdn" Sep 5 23:58:19.033339 containerd[1480]: time="2025-09-05T23:58:19.032853603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9d85d494f-wmr95,Uid:a5a71fbe-1944-4073-b937-1df44f5d3d05,Namespace:calico-system,Attempt:0,}" Sep 5 23:58:19.048119 kubelet[2614]: E0905 23:58:19.047848 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.048119 kubelet[2614]: W0905 23:58:19.047976 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.048571 kubelet[2614]: E0905 23:58:19.048022 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.075069 kubelet[2614]: E0905 23:58:19.075009 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.075069 kubelet[2614]: W0905 23:58:19.075053 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.075261 kubelet[2614]: E0905 23:58:19.075092 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.084959 containerd[1480]: time="2025-09-05T23:58:19.084322200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:19.084959 containerd[1480]: time="2025-09-05T23:58:19.084386039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:19.084959 containerd[1480]: time="2025-09-05T23:58:19.084496676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:19.084959 containerd[1480]: time="2025-09-05T23:58:19.084594274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:19.098228 kubelet[2614]: E0905 23:58:19.097930 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tcpm" podUID="b841a2a5-b2e2-4dd3-a133-b08f780b324f" Sep 5 23:58:19.121902 kubelet[2614]: E0905 23:58:19.121861 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.121902 kubelet[2614]: W0905 23:58:19.121895 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.121902 kubelet[2614]: E0905 23:58:19.121917 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.122099 kubelet[2614]: E0905 23:58:19.122084 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.122145 kubelet[2614]: W0905 23:58:19.122092 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.122145 kubelet[2614]: E0905 23:58:19.122139 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.122367 kubelet[2614]: E0905 23:58:19.122341 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.122367 kubelet[2614]: W0905 23:58:19.122356 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.122367 kubelet[2614]: E0905 23:58:19.122366 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.122589 kubelet[2614]: E0905 23:58:19.122549 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.122589 kubelet[2614]: W0905 23:58:19.122564 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.123382 kubelet[2614]: E0905 23:58:19.122574 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.122864 systemd[1]: Started cri-containerd-81d5aff29c9dca3767e41e027fc4300454080e78bea70a2d78c17dacb3b1a9c6.scope - libcontainer container 81d5aff29c9dca3767e41e027fc4300454080e78bea70a2d78c17dacb3b1a9c6. Sep 5 23:58:19.123764 kubelet[2614]: E0905 23:58:19.123731 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.123764 kubelet[2614]: W0905 23:58:19.123752 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.123764 kubelet[2614]: E0905 23:58:19.123768 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.124614 kubelet[2614]: E0905 23:58:19.124580 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.124614 kubelet[2614]: W0905 23:58:19.124601 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.124614 kubelet[2614]: E0905 23:58:19.124613 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.138886 kubelet[2614]: I0905 23:58:19.138312 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b841a2a5-b2e2-4dd3-a133-b08f780b324f-kubelet-dir\") pod \"csi-node-driver-7tcpm\" (UID: \"b841a2a5-b2e2-4dd3-a133-b08f780b324f\") " pod="calico-system/csi-node-driver-7tcpm" Sep 5 23:58:19.139545 kubelet[2614]: E0905 23:58:19.139505 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.139545 kubelet[2614]: W0905 23:58:19.139540 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.139639 kubelet[2614]: E0905 23:58:19.139560 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.139639 kubelet[2614]: I0905 23:58:19.139598 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b841a2a5-b2e2-4dd3-a133-b08f780b324f-registration-dir\") pod \"csi-node-driver-7tcpm\" (UID: \"b841a2a5-b2e2-4dd3-a133-b08f780b324f\") " pod="calico-system/csi-node-driver-7tcpm" Sep 5 23:58:19.139908 kubelet[2614]: E0905 23:58:19.139889 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.139908 kubelet[2614]: W0905 23:58:19.139905 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.139981 kubelet[2614]: E0905 23:58:19.139916 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.139981 kubelet[2614]: I0905 23:58:19.139942 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8whg4\" (UniqueName: \"kubernetes.io/projected/b841a2a5-b2e2-4dd3-a133-b08f780b324f-kube-api-access-8whg4\") pod \"csi-node-driver-7tcpm\" (UID: \"b841a2a5-b2e2-4dd3-a133-b08f780b324f\") " pod="calico-system/csi-node-driver-7tcpm" Sep 5 23:58:19.140190 kubelet[2614]: E0905 23:58:19.140164 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.140190 kubelet[2614]: W0905 23:58:19.140182 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.140274 kubelet[2614]: E0905 23:58:19.140192 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.140376 kubelet[2614]: I0905 23:58:19.140333 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b841a2a5-b2e2-4dd3-a133-b08f780b324f-varrun\") pod \"csi-node-driver-7tcpm\" (UID: \"b841a2a5-b2e2-4dd3-a133-b08f780b324f\") " pod="calico-system/csi-node-driver-7tcpm" Sep 5 23:58:19.141774 kubelet[2614]: E0905 23:58:19.141741 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.141774 kubelet[2614]: W0905 23:58:19.141767 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.141883 kubelet[2614]: E0905 23:58:19.141782 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.142005 kubelet[2614]: E0905 23:58:19.141990 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.142005 kubelet[2614]: W0905 23:58:19.142003 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.142077 kubelet[2614]: E0905 23:58:19.142013 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.142282 kubelet[2614]: E0905 23:58:19.142258 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.142282 kubelet[2614]: W0905 23:58:19.142276 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.142356 kubelet[2614]: E0905 23:58:19.142288 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.142512 kubelet[2614]: E0905 23:58:19.142496 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.142512 kubelet[2614]: W0905 23:58:19.142512 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.142597 kubelet[2614]: E0905 23:58:19.142522 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.142928 kubelet[2614]: I0905 23:58:19.142623 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b841a2a5-b2e2-4dd3-a133-b08f780b324f-socket-dir\") pod \"csi-node-driver-7tcpm\" (UID: \"b841a2a5-b2e2-4dd3-a133-b08f780b324f\") " pod="calico-system/csi-node-driver-7tcpm" Sep 5 23:58:19.142928 kubelet[2614]: E0905 23:58:19.142739 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.142928 kubelet[2614]: W0905 23:58:19.142747 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.142928 kubelet[2614]: E0905 23:58:19.142755 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.143566 kubelet[2614]: E0905 23:58:19.143533 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.143566 kubelet[2614]: W0905 23:58:19.143558 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.143566 kubelet[2614]: E0905 23:58:19.143572 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.143818 kubelet[2614]: E0905 23:58:19.143803 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.143818 kubelet[2614]: W0905 23:58:19.143814 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.143895 kubelet[2614]: E0905 23:58:19.143824 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.143993 kubelet[2614]: E0905 23:58:19.143979 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.143993 kubelet[2614]: W0905 23:58:19.143991 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.144057 kubelet[2614]: E0905 23:58:19.143999 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.144182 kubelet[2614]: E0905 23:58:19.144166 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.144182 kubelet[2614]: W0905 23:58:19.144177 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.144255 kubelet[2614]: E0905 23:58:19.144186 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.145556 kubelet[2614]: E0905 23:58:19.145513 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.145556 kubelet[2614]: W0905 23:58:19.145550 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.145671 kubelet[2614]: E0905 23:58:19.145564 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.146487 kubelet[2614]: E0905 23:58:19.146463 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.146487 kubelet[2614]: W0905 23:58:19.146481 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.146627 kubelet[2614]: E0905 23:58:19.146496 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.205791 containerd[1480]: time="2025-09-05T23:58:19.205742548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p7bdn,Uid:16b9764e-dfbc-4d43-95b6-b8dca9d67dd0,Namespace:calico-system,Attempt:0,}" Sep 5 23:58:19.233478 containerd[1480]: time="2025-09-05T23:58:19.233145806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9d85d494f-wmr95,Uid:a5a71fbe-1944-4073-b937-1df44f5d3d05,Namespace:calico-system,Attempt:0,} returns sandbox id \"81d5aff29c9dca3767e41e027fc4300454080e78bea70a2d78c17dacb3b1a9c6\"" Sep 5 23:58:19.239352 containerd[1480]: time="2025-09-05T23:58:19.239173381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 23:58:19.248690 kubelet[2614]: E0905 23:58:19.248593 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.248690 kubelet[2614]: W0905 23:58:19.248626 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.248690 kubelet[2614]: E0905 23:58:19.248648 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.250663 kubelet[2614]: E0905 23:58:19.250628 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.250663 kubelet[2614]: W0905 23:58:19.250654 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.250663 kubelet[2614]: E0905 23:58:19.250675 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.253085 kubelet[2614]: E0905 23:58:19.252987 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.253662 kubelet[2614]: W0905 23:58:19.253104 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.253662 kubelet[2614]: E0905 23:58:19.253129 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.253962 kubelet[2614]: E0905 23:58:19.253933 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.253962 kubelet[2614]: W0905 23:58:19.253959 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.254158 kubelet[2614]: E0905 23:58:19.253978 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.256474 kubelet[2614]: E0905 23:58:19.255668 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.256474 kubelet[2614]: W0905 23:58:19.255694 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.256474 kubelet[2614]: E0905 23:58:19.255713 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.257210 kubelet[2614]: E0905 23:58:19.257175 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.257283 kubelet[2614]: W0905 23:58:19.257194 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.257283 kubelet[2614]: E0905 23:58:19.257252 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.257691 kubelet[2614]: E0905 23:58:19.257664 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.257691 kubelet[2614]: W0905 23:58:19.257681 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.257691 kubelet[2614]: E0905 23:58:19.257693 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.259065 kubelet[2614]: E0905 23:58:19.258967 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.259065 kubelet[2614]: W0905 23:58:19.258987 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.259568 kubelet[2614]: E0905 23:58:19.259166 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.260589 kubelet[2614]: E0905 23:58:19.260554 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.260589 kubelet[2614]: W0905 23:58:19.260579 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.260693 kubelet[2614]: E0905 23:58:19.260595 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.262296 kubelet[2614]: E0905 23:58:19.262258 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.262296 kubelet[2614]: W0905 23:58:19.262284 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.262678 kubelet[2614]: E0905 23:58:19.262326 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.262968 kubelet[2614]: E0905 23:58:19.262914 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.262968 kubelet[2614]: W0905 23:58:19.262933 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.262968 kubelet[2614]: E0905 23:58:19.262946 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.264457 kubelet[2614]: E0905 23:58:19.264405 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.264457 kubelet[2614]: W0905 23:58:19.264448 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.264565 kubelet[2614]: E0905 23:58:19.264464 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.265214 containerd[1480]: time="2025-09-05T23:58:19.265017957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:19.265214 containerd[1480]: time="2025-09-05T23:58:19.265092635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:19.265214 containerd[1480]: time="2025-09-05T23:58:19.265108194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:19.265343 kubelet[2614]: E0905 23:58:19.265196 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.265343 kubelet[2614]: W0905 23:58:19.265248 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.265343 kubelet[2614]: E0905 23:58:19.265262 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.270034 kubelet[2614]: E0905 23:58:19.266561 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270034 kubelet[2614]: W0905 23:58:19.266580 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270034 kubelet[2614]: E0905 23:58:19.266593 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.270034 kubelet[2614]: E0905 23:58:19.266803 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270034 kubelet[2614]: W0905 23:58:19.266811 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270034 kubelet[2614]: E0905 23:58:19.266821 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.270034 kubelet[2614]: E0905 23:58:19.266961 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270034 kubelet[2614]: W0905 23:58:19.266968 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270034 kubelet[2614]: E0905 23:58:19.266975 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.270034 kubelet[2614]: E0905 23:58:19.267759 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270385 kubelet[2614]: W0905 23:58:19.267771 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270385 kubelet[2614]: E0905 23:58:19.267782 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.270385 kubelet[2614]: E0905 23:58:19.268060 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270385 kubelet[2614]: W0905 23:58:19.268071 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270385 kubelet[2614]: E0905 23:58:19.268080 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.270385 kubelet[2614]: E0905 23:58:19.268261 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270385 kubelet[2614]: W0905 23:58:19.268270 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270385 kubelet[2614]: E0905 23:58:19.268279 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.270385 kubelet[2614]: E0905 23:58:19.268505 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270385 kubelet[2614]: W0905 23:58:19.268515 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270598 kubelet[2614]: E0905 23:58:19.268524 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.270598 kubelet[2614]: E0905 23:58:19.268797 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270598 kubelet[2614]: W0905 23:58:19.268807 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270598 kubelet[2614]: E0905 23:58:19.268816 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.270598 kubelet[2614]: E0905 23:58:19.269319 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270598 kubelet[2614]: W0905 23:58:19.269331 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270598 kubelet[2614]: E0905 23:58:19.269342 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.270598 kubelet[2614]: E0905 23:58:19.270282 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270598 kubelet[2614]: W0905 23:58:19.270294 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270598 kubelet[2614]: E0905 23:58:19.270305 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.270796 kubelet[2614]: E0905 23:58:19.270774 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.270796 kubelet[2614]: W0905 23:58:19.270785 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.270837 kubelet[2614]: E0905 23:58:19.270796 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.271750 kubelet[2614]: E0905 23:58:19.271721 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.271750 kubelet[2614]: W0905 23:58:19.271742 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.271750 kubelet[2614]: E0905 23:58:19.271754 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:19.276661 containerd[1480]: time="2025-09-05T23:58:19.272845168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:19.301891 kubelet[2614]: E0905 23:58:19.301786 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:19.301891 kubelet[2614]: W0905 23:58:19.301811 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:19.301891 kubelet[2614]: E0905 23:58:19.301834 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:19.311646 systemd[1]: Started cri-containerd-de927eb8ca2216a6ff4f354b502fe6b3eccc0866d1a19df538380507aa373388.scope - libcontainer container de927eb8ca2216a6ff4f354b502fe6b3eccc0866d1a19df538380507aa373388. Sep 5 23:58:19.357832 containerd[1480]: time="2025-09-05T23:58:19.357766237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p7bdn,Uid:16b9764e-dfbc-4d43-95b6-b8dca9d67dd0,Namespace:calico-system,Attempt:0,} returns sandbox id \"de927eb8ca2216a6ff4f354b502fe6b3eccc0866d1a19df538380507aa373388\"" Sep 5 23:58:20.745638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853218117.mount: Deactivated successfully. Sep 5 23:58:20.905534 kubelet[2614]: E0905 23:58:20.905324 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tcpm" podUID="b841a2a5-b2e2-4dd3-a133-b08f780b324f" Sep 5 23:58:21.793949 containerd[1480]: time="2025-09-05T23:58:21.793872775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:21.795980 containerd[1480]: time="2025-09-05T23:58:21.795920968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 5 23:58:21.797191 containerd[1480]: time="2025-09-05T23:58:21.797139659Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:21.799578 containerd[1480]: time="2025-09-05T23:58:21.799531924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:21.800994 containerd[1480]: time="2025-09-05T23:58:21.800943251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.561589315s" Sep 5 23:58:21.801064 containerd[1480]: time="2025-09-05T23:58:21.800994850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 5 23:58:21.802508 containerd[1480]: time="2025-09-05T23:58:21.802162343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 23:58:21.832767 containerd[1480]: time="2025-09-05T23:58:21.832360005Z" level=info msg="CreateContainer within sandbox \"81d5aff29c9dca3767e41e027fc4300454080e78bea70a2d78c17dacb3b1a9c6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 23:58:21.854991 containerd[1480]: time="2025-09-05T23:58:21.854805885Z" level=info msg="CreateContainer within sandbox \"81d5aff29c9dca3767e41e027fc4300454080e78bea70a2d78c17dacb3b1a9c6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8683e919b31427bc33c2b73fbaea7f27a96cdbc84659022e02c4ad8729122aef\"" Sep 5 23:58:21.857643 containerd[1480]: time="2025-09-05T23:58:21.857607141Z" level=info msg="StartContainer for \"8683e919b31427bc33c2b73fbaea7f27a96cdbc84659022e02c4ad8729122aef\"" Sep 5 23:58:21.899329 systemd[1]: Started cri-containerd-8683e919b31427bc33c2b73fbaea7f27a96cdbc84659022e02c4ad8729122aef.scope - libcontainer container 8683e919b31427bc33c2b73fbaea7f27a96cdbc84659022e02c4ad8729122aef. Sep 5 23:58:21.947809 containerd[1480]: time="2025-09-05T23:58:21.947703536Z" level=info msg="StartContainer for \"8683e919b31427bc33c2b73fbaea7f27a96cdbc84659022e02c4ad8729122aef\" returns successfully" Sep 5 23:58:22.056133 kubelet[2614]: E0905 23:58:22.055702 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.057612 kubelet[2614]: W0905 23:58:22.056775 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.057612 kubelet[2614]: E0905 23:58:22.056819 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.058169 kubelet[2614]: E0905 23:58:22.057983 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.058169 kubelet[2614]: W0905 23:58:22.058002 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.058169 kubelet[2614]: E0905 23:58:22.058058 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.059251 kubelet[2614]: E0905 23:58:22.059233 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.060616 kubelet[2614]: W0905 23:58:22.060456 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.060616 kubelet[2614]: E0905 23:58:22.060490 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:22.061264 kubelet[2614]: E0905 23:58:22.061250 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.061496 kubelet[2614]: W0905 23:58:22.061329 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.061496 kubelet[2614]: E0905 23:58:22.061348 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.061953 kubelet[2614]: E0905 23:58:22.061844 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.061953 kubelet[2614]: W0905 23:58:22.061858 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.061953 kubelet[2614]: E0905 23:58:22.061871 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.062235 kubelet[2614]: E0905 23:58:22.062221 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.062497 kubelet[2614]: W0905 23:58:22.062296 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.062497 kubelet[2614]: E0905 23:58:22.062316 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.063581 kubelet[2614]: E0905 23:58:22.063504 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.063581 kubelet[2614]: W0905 23:58:22.063576 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.063679 kubelet[2614]: E0905 23:58:22.063598 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.064056 kubelet[2614]: E0905 23:58:22.064024 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.064056 kubelet[2614]: W0905 23:58:22.064046 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.064056 kubelet[2614]: E0905 23:58:22.064059 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:22.064784 kubelet[2614]: E0905 23:58:22.064756 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.064784 kubelet[2614]: W0905 23:58:22.064779 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.064877 kubelet[2614]: E0905 23:58:22.064798 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.065572 kubelet[2614]: E0905 23:58:22.065409 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.065572 kubelet[2614]: W0905 23:58:22.065511 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.065572 kubelet[2614]: E0905 23:58:22.065525 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.065940 kubelet[2614]: E0905 23:58:22.065827 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.065940 kubelet[2614]: W0905 23:58:22.065845 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.065940 kubelet[2614]: E0905 23:58:22.065860 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.066086 kubelet[2614]: E0905 23:58:22.066075 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.066576 kubelet[2614]: W0905 23:58:22.066184 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.066713 kubelet[2614]: E0905 23:58:22.066693 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.067031 kubelet[2614]: E0905 23:58:22.067014 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.067415 kubelet[2614]: W0905 23:58:22.067187 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.067415 kubelet[2614]: E0905 23:58:22.067208 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:22.069598 kubelet[2614]: E0905 23:58:22.069565 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.069598 kubelet[2614]: W0905 23:58:22.069592 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.069768 kubelet[2614]: E0905 23:58:22.069614 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.069895 kubelet[2614]: E0905 23:58:22.069870 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.069895 kubelet[2614]: W0905 23:58:22.069888 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.069954 kubelet[2614]: E0905 23:58:22.069898 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.088599 kubelet[2614]: E0905 23:58:22.088557 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.088599 kubelet[2614]: W0905 23:58:22.088585 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.088599 kubelet[2614]: E0905 23:58:22.088605 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.088905 kubelet[2614]: E0905 23:58:22.088885 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.088905 kubelet[2614]: W0905 23:58:22.088903 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.088978 kubelet[2614]: E0905 23:58:22.088915 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.089532 kubelet[2614]: E0905 23:58:22.089505 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.089532 kubelet[2614]: W0905 23:58:22.089526 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.089631 kubelet[2614]: E0905 23:58:22.089540 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:22.089828 kubelet[2614]: E0905 23:58:22.089800 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.089828 kubelet[2614]: W0905 23:58:22.089820 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.089892 kubelet[2614]: E0905 23:58:22.089832 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.090019 kubelet[2614]: E0905 23:58:22.089997 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.090019 kubelet[2614]: W0905 23:58:22.090012 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.090085 kubelet[2614]: E0905 23:58:22.090023 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.090226 kubelet[2614]: E0905 23:58:22.090206 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.090226 kubelet[2614]: W0905 23:58:22.090220 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.090285 kubelet[2614]: E0905 23:58:22.090231 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.090439 kubelet[2614]: E0905 23:58:22.090412 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.090439 kubelet[2614]: W0905 23:58:22.090436 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.090585 kubelet[2614]: E0905 23:58:22.090446 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.092560 kubelet[2614]: E0905 23:58:22.092527 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.092560 kubelet[2614]: W0905 23:58:22.092552 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.092674 kubelet[2614]: E0905 23:58:22.092569 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:22.092927 kubelet[2614]: E0905 23:58:22.092901 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.092927 kubelet[2614]: W0905 23:58:22.092919 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.093004 kubelet[2614]: E0905 23:58:22.092931 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.093538 kubelet[2614]: E0905 23:58:22.093510 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.093538 kubelet[2614]: W0905 23:58:22.093530 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.093616 kubelet[2614]: E0905 23:58:22.093543 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.093711 kubelet[2614]: E0905 23:58:22.093692 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.093711 kubelet[2614]: W0905 23:58:22.093705 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.093771 kubelet[2614]: E0905 23:58:22.093715 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.093894 kubelet[2614]: E0905 23:58:22.093872 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.093894 kubelet[2614]: W0905 23:58:22.093886 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.093894 kubelet[2614]: E0905 23:58:22.093894 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.095279 kubelet[2614]: E0905 23:58:22.094066 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.095279 kubelet[2614]: W0905 23:58:22.094081 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.095279 kubelet[2614]: E0905 23:58:22.094089 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:22.095279 kubelet[2614]: E0905 23:58:22.094336 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.095279 kubelet[2614]: W0905 23:58:22.094347 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.095279 kubelet[2614]: E0905 23:58:22.094356 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.095279 kubelet[2614]: E0905 23:58:22.094865 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.095279 kubelet[2614]: W0905 23:58:22.094877 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.095279 kubelet[2614]: E0905 23:58:22.094886 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.095279 kubelet[2614]: E0905 23:58:22.095045 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.095551 kubelet[2614]: W0905 23:58:22.095053 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.095551 kubelet[2614]: E0905 23:58:22.095061 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.095758 kubelet[2614]: E0905 23:58:22.095734 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.095758 kubelet[2614]: W0905 23:58:22.095751 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.095827 kubelet[2614]: E0905 23:58:22.095763 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:22.096006 kubelet[2614]: E0905 23:58:22.095979 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:22.096006 kubelet[2614]: W0905 23:58:22.095997 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:22.096006 kubelet[2614]: E0905 23:58:22.096007 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:22.906063 kubelet[2614]: E0905 23:58:22.905981 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tcpm" podUID="b841a2a5-b2e2-4dd3-a133-b08f780b324f" Sep 5 23:58:23.003329 kubelet[2614]: I0905 23:58:23.003265 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:58:23.075826 kubelet[2614]: E0905 23:58:23.075765 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.075826 kubelet[2614]: W0905 23:58:23.075800 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.075826 kubelet[2614]: E0905 23:58:23.075829 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.076412 kubelet[2614]: E0905 23:58:23.076145 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.076412 kubelet[2614]: W0905 23:58:23.076158 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.076412 kubelet[2614]: E0905 23:58:23.076173 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.076584 kubelet[2614]: E0905 23:58:23.076481 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.076584 kubelet[2614]: W0905 23:58:23.076493 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.076584 kubelet[2614]: E0905 23:58:23.076507 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.076829 kubelet[2614]: E0905 23:58:23.076769 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.076829 kubelet[2614]: W0905 23:58:23.076794 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.076829 kubelet[2614]: E0905 23:58:23.076809 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:23.077150 kubelet[2614]: E0905 23:58:23.077109 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.077150 kubelet[2614]: W0905 23:58:23.077130 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.077150 kubelet[2614]: E0905 23:58:23.077145 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.077437 kubelet[2614]: E0905 23:58:23.077385 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.077503 kubelet[2614]: W0905 23:58:23.077405 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.077503 kubelet[2614]: E0905 23:58:23.077467 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.077738 kubelet[2614]: E0905 23:58:23.077717 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.077797 kubelet[2614]: W0905 23:58:23.077743 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.077797 kubelet[2614]: E0905 23:58:23.077757 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.078033 kubelet[2614]: E0905 23:58:23.078012 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.078033 kubelet[2614]: W0905 23:58:23.078030 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.078149 kubelet[2614]: E0905 23:58:23.078105 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.078553 kubelet[2614]: E0905 23:58:23.078529 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.078553 kubelet[2614]: W0905 23:58:23.078550 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.078675 kubelet[2614]: E0905 23:58:23.078567 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:23.078918 kubelet[2614]: E0905 23:58:23.078891 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.078918 kubelet[2614]: W0905 23:58:23.078915 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.079021 kubelet[2614]: E0905 23:58:23.078929 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.079233 kubelet[2614]: E0905 23:58:23.079208 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.079310 kubelet[2614]: W0905 23:58:23.079234 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.079310 kubelet[2614]: E0905 23:58:23.079253 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.079903 kubelet[2614]: E0905 23:58:23.079864 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.079903 kubelet[2614]: W0905 23:58:23.079897 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.080140 kubelet[2614]: E0905 23:58:23.079920 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.080509 kubelet[2614]: E0905 23:58:23.080467 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.080509 kubelet[2614]: W0905 23:58:23.080507 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.080645 kubelet[2614]: E0905 23:58:23.080540 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.081070 kubelet[2614]: E0905 23:58:23.080984 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.081070 kubelet[2614]: W0905 23:58:23.081007 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.081070 kubelet[2614]: E0905 23:58:23.081020 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:23.081307 kubelet[2614]: E0905 23:58:23.081281 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.081307 kubelet[2614]: W0905 23:58:23.081291 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.081307 kubelet[2614]: E0905 23:58:23.081301 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.097249 kubelet[2614]: E0905 23:58:23.097182 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.097249 kubelet[2614]: W0905 23:58:23.097227 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.097480 kubelet[2614]: E0905 23:58:23.097260 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.097715 kubelet[2614]: E0905 23:58:23.097695 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.097774 kubelet[2614]: W0905 23:58:23.097718 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.097774 kubelet[2614]: E0905 23:58:23.097738 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.098176 kubelet[2614]: E0905 23:58:23.098153 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.098243 kubelet[2614]: W0905 23:58:23.098179 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.098243 kubelet[2614]: E0905 23:58:23.098200 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.098750 kubelet[2614]: E0905 23:58:23.098696 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.098750 kubelet[2614]: W0905 23:58:23.098733 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.098885 kubelet[2614]: E0905 23:58:23.098758 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:23.099161 kubelet[2614]: E0905 23:58:23.099142 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.099206 kubelet[2614]: W0905 23:58:23.099165 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.099206 kubelet[2614]: E0905 23:58:23.099187 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.099558 kubelet[2614]: E0905 23:58:23.099537 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.099608 kubelet[2614]: W0905 23:58:23.099562 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.099608 kubelet[2614]: E0905 23:58:23.099580 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.100035 kubelet[2614]: E0905 23:58:23.100014 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.100100 kubelet[2614]: W0905 23:58:23.100038 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.100100 kubelet[2614]: E0905 23:58:23.100059 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.100537 kubelet[2614]: E0905 23:58:23.100490 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.100537 kubelet[2614]: W0905 23:58:23.100513 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.100537 kubelet[2614]: E0905 23:58:23.100530 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.100889 kubelet[2614]: E0905 23:58:23.100866 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.100889 kubelet[2614]: W0905 23:58:23.100885 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.101055 kubelet[2614]: E0905 23:58:23.100902 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:23.101830 kubelet[2614]: E0905 23:58:23.101798 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.101830 kubelet[2614]: W0905 23:58:23.101826 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.102015 kubelet[2614]: E0905 23:58:23.101846 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.102175 kubelet[2614]: E0905 23:58:23.102147 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.102175 kubelet[2614]: W0905 23:58:23.102166 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.102175 kubelet[2614]: E0905 23:58:23.102176 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.102405 kubelet[2614]: E0905 23:58:23.102389 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.102405 kubelet[2614]: W0905 23:58:23.102404 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.102748 kubelet[2614]: E0905 23:58:23.102413 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.102748 kubelet[2614]: E0905 23:58:23.102600 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.102748 kubelet[2614]: W0905 23:58:23.102608 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.102748 kubelet[2614]: E0905 23:58:23.102617 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.103476 kubelet[2614]: E0905 23:58:23.103156 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.103476 kubelet[2614]: W0905 23:58:23.103174 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.103476 kubelet[2614]: E0905 23:58:23.103189 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:23.104364 kubelet[2614]: E0905 23:58:23.104046 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.104364 kubelet[2614]: W0905 23:58:23.104066 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.104364 kubelet[2614]: E0905 23:58:23.104093 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.104573 kubelet[2614]: E0905 23:58:23.104344 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.104573 kubelet[2614]: W0905 23:58:23.104407 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.104573 kubelet[2614]: E0905 23:58:23.104483 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.104975 kubelet[2614]: E0905 23:58:23.104958 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.105026 kubelet[2614]: W0905 23:58:23.104975 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.105026 kubelet[2614]: E0905 23:58:23.104989 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:58:23.105620 kubelet[2614]: E0905 23:58:23.105585 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:58:23.105620 kubelet[2614]: W0905 23:58:23.105604 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:58:23.105620 kubelet[2614]: E0905 23:58:23.105617 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:58:23.304204 containerd[1480]: time="2025-09-05T23:58:23.301328910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:23.306026 containerd[1480]: time="2025-09-05T23:58:23.305972447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 23:58:23.308777 containerd[1480]: time="2025-09-05T23:58:23.308243196Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:23.315902 containerd[1480]: time="2025-09-05T23:58:23.315294520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:23.317517 containerd[1480]: time="2025-09-05T23:58:23.316235579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.514033636s" Sep 5 23:58:23.317517 containerd[1480]: time="2025-09-05T23:58:23.317021641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 23:58:23.325864 containerd[1480]: time="2025-09-05T23:58:23.325375255Z" level=info msg="CreateContainer within sandbox \"de927eb8ca2216a6ff4f354b502fe6b3eccc0866d1a19df538380507aa373388\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 23:58:23.342931 containerd[1480]: time="2025-09-05T23:58:23.342867626Z" level=info msg="CreateContainer within sandbox \"de927eb8ca2216a6ff4f354b502fe6b3eccc0866d1a19df538380507aa373388\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"205fd978ca24cb805946e3448079d3718fe5477dceebaf2aec402466093c92a0\"" Sep 5 23:58:23.343408 containerd[1480]: time="2025-09-05T23:58:23.343386255Z" level=info msg="StartContainer for \"205fd978ca24cb805946e3448079d3718fe5477dceebaf2aec402466093c92a0\"" Sep 5 23:58:23.380640 systemd[1]: Started cri-containerd-205fd978ca24cb805946e3448079d3718fe5477dceebaf2aec402466093c92a0.scope - libcontainer container 205fd978ca24cb805946e3448079d3718fe5477dceebaf2aec402466093c92a0. Sep 5 23:58:23.430509 containerd[1480]: time="2025-09-05T23:58:23.428663678Z" level=info msg="StartContainer for \"205fd978ca24cb805946e3448079d3718fe5477dceebaf2aec402466093c92a0\" returns successfully" Sep 5 23:58:23.441054 systemd[1]: cri-containerd-205fd978ca24cb805946e3448079d3718fe5477dceebaf2aec402466093c92a0.scope: Deactivated successfully. Sep 5 23:58:23.475137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-205fd978ca24cb805946e3448079d3718fe5477dceebaf2aec402466093c92a0-rootfs.mount: Deactivated successfully. 
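
The FlexVolume probe failures above and the flexvol-driver container started here are two halves of the same mechanism: kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for vendor~driver binaries, execs each one with "init" as the first argument, and JSON-decodes whatever the driver prints on stdout. A missing binary yields empty output, which is exactly the "unexpected end of JSON input" unmarshal error. A minimal Go sketch of that handshake, under the standard FlexVolume call convention (a hypothetical driver, not Calico's actual uds binary):

```go
package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON shape kubelet expects back from a FlexVolume driver.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Answer the init probe with valid JSON; attach=false tells
		// kubelet to skip attach/detach calls for this driver.
		out.Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Unimplemented calls report "Not supported" rather than failing.
	out.Encode(driverStatus{Status: "Not supported"})
}
```

The pod2daemon-flexvol image pulled above exists precisely to install that uds binary into the nodeagent~uds plugin directory; once the flexvol-driver container has run, the probe errors stop.
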
Sep 5 23:58:23.578180 containerd[1480]: time="2025-09-05T23:58:23.578051635Z" level=info msg="shim disconnected" id=205fd978ca24cb805946e3448079d3718fe5477dceebaf2aec402466093c92a0 namespace=k8s.io Sep 5 23:58:23.578180 containerd[1480]: time="2025-09-05T23:58:23.578174753Z" level=warning msg="cleaning up after shim disconnected" id=205fd978ca24cb805946e3448079d3718fe5477dceebaf2aec402466093c92a0 namespace=k8s.io Sep 5 23:58:23.578180 containerd[1480]: time="2025-09-05T23:58:23.578184913Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:58:24.010463 containerd[1480]: time="2025-09-05T23:58:24.010283025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 23:58:24.036905 kubelet[2614]: I0905 23:58:24.036606 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9d85d494f-wmr95" podStartSLOduration=3.472470516 podStartE2EDuration="6.036496253s" podCreationTimestamp="2025-09-05 23:58:18 +0000 UTC" firstStartedPulling="2025-09-05 23:58:19.23795977 +0000 UTC m=+21.477112159" lastFinishedPulling="2025-09-05 23:58:21.801985507 +0000 UTC m=+24.041137896" observedRunningTime="2025-09-05 23:58:22.025450189 +0000 UTC m=+24.264602618" watchObservedRunningTime="2025-09-05 23:58:24.036496253 +0000 UTC m=+26.275648642" Sep 5 23:58:24.904657 kubelet[2614]: E0905 23:58:24.904529 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tcpm" podUID="b841a2a5-b2e2-4dd3-a133-b08f780b324f" Sep 5 23:58:26.905060 kubelet[2614]: E0905 23:58:26.904781 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tcpm" podUID="b841a2a5-b2e2-4dd3-a133-b08f780b324f" Sep 5 23:58:27.609561 containerd[1480]: time="2025-09-05T23:58:27.609452476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:27.611599 containerd[1480]: time="2025-09-05T23:58:27.611548832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 23:58:27.613234 containerd[1480]: time="2025-09-05T23:58:27.613147519Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:27.619882 containerd[1480]: time="2025-09-05T23:58:27.619831300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:27.621689 containerd[1480]: time="2025-09-05T23:58:27.621633423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.611291719s" Sep 5 23:58:27.621846 containerd[1480]: time="2025-09-05T23:58:27.621828139Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 23:58:27.627929 containerd[1480]: time="2025-09-05T23:58:27.627885453Z" level=info msg="CreateContainer within sandbox \"de927eb8ca2216a6ff4f354b502fe6b3eccc0866d1a19df538380507aa373388\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 23:58:27.649292 containerd[1480]: time="2025-09-05T23:58:27.649195010Z" level=info msg="CreateContainer within sandbox \"de927eb8ca2216a6ff4f354b502fe6b3eccc0866d1a19df538380507aa373388\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"74eccd3540a3a3695c551c8d3f195fa969bc0f30c9115f00835c8b6d9aa23999\"" Sep 5 23:58:27.652584 containerd[1480]: time="2025-09-05T23:58:27.652542141Z" level=info msg="StartContainer for \"74eccd3540a3a3695c551c8d3f195fa969bc0f30c9115f00835c8b6d9aa23999\"" Sep 5 23:58:27.683665 systemd[1]: Started cri-containerd-74eccd3540a3a3695c551c8d3f195fa969bc0f30c9115f00835c8b6d9aa23999.scope - libcontainer container 74eccd3540a3a3695c551c8d3f195fa969bc0f30c9115f00835c8b6d9aa23999. Sep 5 23:58:27.720871 containerd[1480]: time="2025-09-05T23:58:27.720803483Z" level=info msg="StartContainer for \"74eccd3540a3a3695c551c8d3f195fa969bc0f30c9115f00835c8b6d9aa23999\" returns successfully" Sep 5 23:58:28.278998 containerd[1480]: time="2025-09-05T23:58:28.278916939Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:58:28.282181 systemd[1]: cri-containerd-74eccd3540a3a3695c551c8d3f195fa969bc0f30c9115f00835c8b6d9aa23999.scope: Deactivated successfully. Sep 5 23:58:28.305084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74eccd3540a3a3695c551c8d3f195fa969bc0f30c9115f00835c8b6d9aa23999-rootfs.mount: Deactivated successfully. Sep 5 23:58:28.326753 kubelet[2614]: I0905 23:58:28.326080 2614 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 23:58:28.394991 containerd[1480]: time="2025-09-05T23:58:28.394863887Z" level=info msg="shim disconnected" id=74eccd3540a3a3695c551c8d3f195fa969bc0f30c9115f00835c8b6d9aa23999 namespace=k8s.io Sep 5 23:58:28.394991 containerd[1480]: time="2025-09-05T23:58:28.394945726Z" level=warning msg="cleaning up after shim disconnected" id=74eccd3540a3a3695c551c8d3f195fa969bc0f30c9115f00835c8b6d9aa23999 namespace=k8s.io Sep 5 23:58:28.394991 containerd[1480]: time="2025-09-05T23:58:28.394955805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:58:28.432072 systemd[1]: Created slice kubepods-besteffort-pod0cbb4f78_12a8_4979_8eea_c1da54e61850.slice - libcontainer container kubepods-besteffort-pod0cbb4f78_12a8_4979_8eea_c1da54e61850.slice. 
Sep 5 23:58:28.445256 kubelet[2614]: I0905 23:58:28.444213 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cbb4f78-12a8-4979-8eea-c1da54e61850-whisker-ca-bundle\") pod \"whisker-66b8b66984-qdr7l\" (UID: \"0cbb4f78-12a8-4979-8eea-c1da54e61850\") " pod="calico-system/whisker-66b8b66984-qdr7l" Sep 5 23:58:28.445256 kubelet[2614]: I0905 23:58:28.444261 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a2c532-c00e-4e75-a9f5-90ed8209a139-config\") pod \"goldmane-54d579b49d-t7j5b\" (UID: \"79a2c532-c00e-4e75-a9f5-90ed8209a139\") " pod="calico-system/goldmane-54d579b49d-t7j5b" Sep 5 23:58:28.445256 kubelet[2614]: I0905 23:58:28.444280 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/79a2c532-c00e-4e75-a9f5-90ed8209a139-goldmane-key-pair\") pod \"goldmane-54d579b49d-t7j5b\" (UID: \"79a2c532-c00e-4e75-a9f5-90ed8209a139\") " pod="calico-system/goldmane-54d579b49d-t7j5b" Sep 5 23:58:28.445256 kubelet[2614]: I0905 23:58:28.444294 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xcx2\" (UniqueName: \"kubernetes.io/projected/79a2c532-c00e-4e75-a9f5-90ed8209a139-kube-api-access-8xcx2\") pod \"goldmane-54d579b49d-t7j5b\" (UID: \"79a2c532-c00e-4e75-a9f5-90ed8209a139\") " pod="calico-system/goldmane-54d579b49d-t7j5b" Sep 5 23:58:28.445256 kubelet[2614]: I0905 23:58:28.444313 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c543c41c-ee3f-499e-8d6b-b62b005decb4-config-volume\") pod \"coredns-674b8bbfcf-8kktj\" (UID: \"c543c41c-ee3f-499e-8d6b-b62b005decb4\") " pod="kube-system/coredns-674b8bbfcf-8kktj" Sep 5 23:58:28.445511 kubelet[2614]: I0905 23:58:28.444333 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldb92\" (UniqueName: \"kubernetes.io/projected/0cbb4f78-12a8-4979-8eea-c1da54e61850-kube-api-access-ldb92\") pod \"whisker-66b8b66984-qdr7l\" (UID: \"0cbb4f78-12a8-4979-8eea-c1da54e61850\") " pod="calico-system/whisker-66b8b66984-qdr7l" Sep 5 23:58:28.445511 kubelet[2614]: I0905 23:58:28.444350 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7t6\" (UniqueName: \"kubernetes.io/projected/fc1e0fa4-e565-4b6d-a320-3ab660954c63-kube-api-access-lz7t6\") pod \"coredns-674b8bbfcf-kgz7b\" (UID: \"fc1e0fa4-e565-4b6d-a320-3ab660954c63\") " pod="kube-system/coredns-674b8bbfcf-kgz7b" Sep 5 23:58:28.445511 kubelet[2614]: I0905 23:58:28.444365 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a2c532-c00e-4e75-a9f5-90ed8209a139-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-t7j5b\" (UID: \"79a2c532-c00e-4e75-a9f5-90ed8209a139\") " pod="calico-system/goldmane-54d579b49d-t7j5b" Sep 5 23:58:28.445511 kubelet[2614]: I0905 23:58:28.444381 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l95gc\" (UniqueName: \"kubernetes.io/projected/62b3a3df-9267-4dc2-9423-4fc82ad97b42-kube-api-access-l95gc\") pod 
\"calico-kube-controllers-7cb67bb5b6-b9xqc\" (UID: \"62b3a3df-9267-4dc2-9423-4fc82ad97b42\") " pod="calico-system/calico-kube-controllers-7cb67bb5b6-b9xqc" Sep 5 23:58:28.445511 kubelet[2614]: I0905 23:58:28.444399 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0cbb4f78-12a8-4979-8eea-c1da54e61850-whisker-backend-key-pair\") pod \"whisker-66b8b66984-qdr7l\" (UID: \"0cbb4f78-12a8-4979-8eea-c1da54e61850\") " pod="calico-system/whisker-66b8b66984-qdr7l" Sep 5 23:58:28.445626 kubelet[2614]: I0905 23:58:28.444413 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pfd2\" (UniqueName: \"kubernetes.io/projected/c543c41c-ee3f-499e-8d6b-b62b005decb4-kube-api-access-5pfd2\") pod \"coredns-674b8bbfcf-8kktj\" (UID: \"c543c41c-ee3f-499e-8d6b-b62b005decb4\") " pod="kube-system/coredns-674b8bbfcf-8kktj" Sep 5 23:58:28.445626 kubelet[2614]: I0905 23:58:28.444450 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b3a3df-9267-4dc2-9423-4fc82ad97b42-tigera-ca-bundle\") pod \"calico-kube-controllers-7cb67bb5b6-b9xqc\" (UID: \"62b3a3df-9267-4dc2-9423-4fc82ad97b42\") " pod="calico-system/calico-kube-controllers-7cb67bb5b6-b9xqc" Sep 5 23:58:28.445626 kubelet[2614]: I0905 23:58:28.444467 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc1e0fa4-e565-4b6d-a320-3ab660954c63-config-volume\") pod \"coredns-674b8bbfcf-kgz7b\" (UID: \"fc1e0fa4-e565-4b6d-a320-3ab660954c63\") " pod="kube-system/coredns-674b8bbfcf-kgz7b" Sep 5 23:58:28.461490 systemd[1]: Created slice kubepods-besteffort-pod79a2c532_c00e_4e75_a9f5_90ed8209a139.slice - libcontainer container kubepods-besteffort-pod79a2c532_c00e_4e75_a9f5_90ed8209a139.slice. Sep 5 23:58:28.471805 systemd[1]: Created slice kubepods-burstable-podc543c41c_ee3f_499e_8d6b_b62b005decb4.slice - libcontainer container kubepods-burstable-podc543c41c_ee3f_499e_8d6b_b62b005decb4.slice. Sep 5 23:58:28.481209 systemd[1]: Created slice kubepods-burstable-podfc1e0fa4_e565_4b6d_a320_3ab660954c63.slice - libcontainer container kubepods-burstable-podfc1e0fa4_e565_4b6d_a320_3ab660954c63.slice. Sep 5 23:58:28.493616 systemd[1]: Created slice kubepods-besteffort-pod62b3a3df_9267_4dc2_9423_4fc82ad97b42.slice - libcontainer container kubepods-besteffort-pod62b3a3df_9267_4dc2_9423_4fc82ad97b42.slice. Sep 5 23:58:28.502332 systemd[1]: Created slice kubepods-besteffort-pod84038953_2a1d_453b_a995_2e6cd5cc7120.slice - libcontainer container kubepods-besteffort-pod84038953_2a1d_453b_a995_2e6cd5cc7120.slice. Sep 5 23:58:28.514227 systemd[1]: Created slice kubepods-besteffort-podd5bacc15_80ca_43c3_bafd_f08e810b113d.slice - libcontainer container kubepods-besteffort-podd5bacc15_80ca_43c3_bafd_f08e810b113d.slice. Sep 5 23:58:28.525304 systemd[1]: Created slice kubepods-besteffort-podcbe0b2e4_eb4a_4cc8_acdc_005b19facc59.slice - libcontainer container kubepods-besteffort-podcbe0b2e4_eb4a_4cc8_acdc_005b19facc59.slice. 
Sep 5 23:58:28.545125 kubelet[2614]: I0905 23:58:28.544943 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84038953-2a1d-453b-a995-2e6cd5cc7120-calico-apiserver-certs\") pod \"calico-apiserver-685cf96569-gzvsl\" (UID: \"84038953-2a1d-453b-a995-2e6cd5cc7120\") " pod="calico-apiserver/calico-apiserver-685cf96569-gzvsl" Sep 5 23:58:28.545125 kubelet[2614]: I0905 23:58:28.545043 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5bacc15-80ca-43c3-bafd-f08e810b113d-calico-apiserver-certs\") pod \"calico-apiserver-648b95987d-x5bx7\" (UID: \"d5bacc15-80ca-43c3-bafd-f08e810b113d\") " pod="calico-apiserver/calico-apiserver-648b95987d-x5bx7" Sep 5 23:58:28.545125 kubelet[2614]: I0905 23:58:28.545074 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cbe0b2e4-eb4a-4cc8-acdc-005b19facc59-calico-apiserver-certs\") pod \"calico-apiserver-648b95987d-pmqf8\" (UID: \"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59\") " pod="calico-apiserver/calico-apiserver-648b95987d-pmqf8" Sep 5 23:58:28.545125 kubelet[2614]: I0905 23:58:28.545095 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7fh\" (UniqueName: \"kubernetes.io/projected/cbe0b2e4-eb4a-4cc8-acdc-005b19facc59-kube-api-access-6r7fh\") pod \"calico-apiserver-648b95987d-pmqf8\" (UID: \"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59\") " pod="calico-apiserver/calico-apiserver-648b95987d-pmqf8" Sep 5 23:58:28.547382 kubelet[2614]: I0905 23:58:28.547248 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr9xx\" (UniqueName: \"kubernetes.io/projected/d5bacc15-80ca-43c3-bafd-f08e810b113d-kube-api-access-hr9xx\") pod \"calico-apiserver-648b95987d-x5bx7\" (UID: \"d5bacc15-80ca-43c3-bafd-f08e810b113d\") " pod="calico-apiserver/calico-apiserver-648b95987d-x5bx7" Sep 5 23:58:28.551658 kubelet[2614]: I0905 23:58:28.549625 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fshj8\" (UniqueName: \"kubernetes.io/projected/84038953-2a1d-453b-a995-2e6cd5cc7120-kube-api-access-fshj8\") pod \"calico-apiserver-685cf96569-gzvsl\" (UID: \"84038953-2a1d-453b-a995-2e6cd5cc7120\") " pod="calico-apiserver/calico-apiserver-685cf96569-gzvsl" Sep 5 23:58:28.746893 containerd[1480]: time="2025-09-05T23:58:28.746837808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b8b66984-qdr7l,Uid:0cbb4f78-12a8-4979-8eea-c1da54e61850,Namespace:calico-system,Attempt:0,}" Sep 5 23:58:28.774453 containerd[1480]: time="2025-09-05T23:58:28.774396404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-t7j5b,Uid:79a2c532-c00e-4e75-a9f5-90ed8209a139,Namespace:calico-system,Attempt:0,}" Sep 5 23:58:28.777318 containerd[1480]: time="2025-09-05T23:58:28.777257185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kktj,Uid:c543c41c-ee3f-499e-8d6b-b62b005decb4,Namespace:kube-system,Attempt:0,}" Sep 5 23:58:28.786449 containerd[1480]: time="2025-09-05T23:58:28.786389279Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-kgz7b,Uid:fc1e0fa4-e565-4b6d-a320-3ab660954c63,Namespace:kube-system,Attempt:0,}" Sep 5 23:58:28.799339 containerd[1480]: time="2025-09-05T23:58:28.799217456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb67bb5b6-b9xqc,Uid:62b3a3df-9267-4dc2-9423-4fc82ad97b42,Namespace:calico-system,Attempt:0,}" Sep 5 23:58:28.808710 containerd[1480]: time="2025-09-05T23:58:28.808661103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685cf96569-gzvsl,Uid:84038953-2a1d-453b-a995-2e6cd5cc7120,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:58:28.825484 containerd[1480]: time="2025-09-05T23:58:28.825403960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648b95987d-x5bx7,Uid:d5bacc15-80ca-43c3-bafd-f08e810b113d,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:58:28.839299 containerd[1480]: time="2025-09-05T23:58:28.839259557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648b95987d-pmqf8,Uid:cbe0b2e4-eb4a-4cc8-acdc-005b19facc59,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:58:28.916037 systemd[1]: Created slice kubepods-besteffort-podb841a2a5_b2e2_4dd3_a133_b08f780b324f.slice - libcontainer container kubepods-besteffort-podb841a2a5_b2e2_4dd3_a133_b08f780b324f.slice. Sep 5 23:58:28.933525 containerd[1480]: time="2025-09-05T23:58:28.933469630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tcpm,Uid:b841a2a5-b2e2-4dd3-a133-b08f780b324f,Namespace:calico-system,Attempt:0,}" Sep 5 23:58:28.992791 containerd[1480]: time="2025-09-05T23:58:28.992637100Z" level=error msg="Failed to destroy network for sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.007484 containerd[1480]: time="2025-09-05T23:58:29.006706933Z" level=error msg="encountered an error cleaning up failed sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.007484 containerd[1480]: time="2025-09-05T23:58:29.006789452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b8b66984-qdr7l,Uid:0cbb4f78-12a8-4979-8eea-c1da54e61850,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.007948 kubelet[2614]: E0905 23:58:29.007040 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.007948 kubelet[2614]: E0905 23:58:29.007186 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66b8b66984-qdr7l" Sep 5 23:58:29.007948 kubelet[2614]: E0905 23:58:29.007208 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66b8b66984-qdr7l" Sep 5 23:58:29.008059 kubelet[2614]: E0905 23:58:29.007265 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66b8b66984-qdr7l_calico-system(0cbb4f78-12a8-4979-8eea-c1da54e61850)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66b8b66984-qdr7l_calico-system(0cbb4f78-12a8-4979-8eea-c1da54e61850)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66b8b66984-qdr7l" podUID="0cbb4f78-12a8-4979-8eea-c1da54e61850" Sep 5 23:58:29.042715 containerd[1480]: time="2025-09-05T23:58:29.042113260Z" level=error msg="Failed to destroy network for sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.042715 containerd[1480]: time="2025-09-05T23:58:29.042165139Z" level=error msg="Failed to destroy network for sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.045406 containerd[1480]: time="2025-09-05T23:58:29.045073720Z" level=error msg="encountered an error cleaning up failed sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.052621 containerd[1480]: time="2025-09-05T23:58:29.051029640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kktj,Uid:c543c41c-ee3f-499e-8d6b-b62b005decb4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.055095 containerd[1480]: time="2025-09-05T23:58:29.047885503Z" level=error msg="encountered an error cleaning up failed sandbox 
\"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.055930 containerd[1480]: time="2025-09-05T23:58:29.054991280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-t7j5b,Uid:79a2c532-c00e-4e75-a9f5-90ed8209a139,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.056065 kubelet[2614]: E0905 23:58:29.055653 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.056065 kubelet[2614]: E0905 23:58:29.055700 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-t7j5b" Sep 5 23:58:29.056065 kubelet[2614]: E0905 23:58:29.055719 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-t7j5b" Sep 5 23:58:29.056151 kubelet[2614]: E0905 23:58:29.055769 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-t7j5b_calico-system(79a2c532-c00e-4e75-a9f5-90ed8209a139)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-t7j5b_calico-system(79a2c532-c00e-4e75-a9f5-90ed8209a139)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-t7j5b" podUID="79a2c532-c00e-4e75-a9f5-90ed8209a139" Sep 5 23:58:29.056151 kubelet[2614]: E0905 23:58:29.055607 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.056151 kubelet[2614]: 
E0905 23:58:29.055834 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8kktj" Sep 5 23:58:29.057745 kubelet[2614]: E0905 23:58:29.055848 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8kktj" Sep 5 23:58:29.057745 kubelet[2614]: E0905 23:58:29.055886 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8kktj_kube-system(c543c41c-ee3f-499e-8d6b-b62b005decb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8kktj_kube-system(c543c41c-ee3f-499e-8d6b-b62b005decb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8kktj" podUID="c543c41c-ee3f-499e-8d6b-b62b005decb4" Sep 5 23:58:29.057844 containerd[1480]: time="2025-09-05T23:58:29.057220075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 23:58:29.061516 kubelet[2614]: I0905 23:58:29.060875 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Sep 5 23:58:29.064953 containerd[1480]: time="2025-09-05T23:58:29.064841401Z" level=info msg="StopPodSandbox for \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\"" Sep 5 23:58:29.066687 containerd[1480]: time="2025-09-05T23:58:29.066644805Z" level=info msg="Ensure that sandbox e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3 in task-service has been cleanup successfully" Sep 5 23:58:29.105499 containerd[1480]: time="2025-09-05T23:58:29.105309185Z" level=error msg="Failed to destroy network for sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.106032 containerd[1480]: time="2025-09-05T23:58:29.106000132Z" level=error msg="encountered an error cleaning up failed sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.106179 containerd[1480]: time="2025-09-05T23:58:29.106157488Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-kgz7b,Uid:fc1e0fa4-e565-4b6d-a320-3ab660954c63,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.106948 kubelet[2614]: E0905 23:58:29.106773 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.106948 kubelet[2614]: E0905 23:58:29.106841 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kgz7b" Sep 5 23:58:29.106948 kubelet[2614]: E0905 23:58:29.106861 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kgz7b" Sep 5 23:58:29.108122 kubelet[2614]: E0905 23:58:29.107307 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kgz7b_kube-system(fc1e0fa4-e565-4b6d-a320-3ab660954c63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kgz7b_kube-system(fc1e0fa4-e565-4b6d-a320-3ab660954c63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kgz7b" podUID="fc1e0fa4-e565-4b6d-a320-3ab660954c63" Sep 5 23:58:29.155228 containerd[1480]: time="2025-09-05T23:58:29.154072882Z" level=error msg="Failed to destroy network for sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.157587 containerd[1480]: time="2025-09-05T23:58:29.157523613Z" level=error msg="encountered an error cleaning up failed sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.157719 containerd[1480]: 
time="2025-09-05T23:58:29.157604931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685cf96569-gzvsl,Uid:84038953-2a1d-453b-a995-2e6cd5cc7120,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.158462 kubelet[2614]: E0905 23:58:29.158409 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.158550 kubelet[2614]: E0905 23:58:29.158481 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-685cf96569-gzvsl" Sep 5 23:58:29.158550 kubelet[2614]: E0905 23:58:29.158502 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-685cf96569-gzvsl" Sep 5 23:58:29.158636 kubelet[2614]: E0905 23:58:29.158562 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-685cf96569-gzvsl_calico-apiserver(84038953-2a1d-453b-a995-2e6cd5cc7120)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-685cf96569-gzvsl_calico-apiserver(84038953-2a1d-453b-a995-2e6cd5cc7120)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-685cf96569-gzvsl" podUID="84038953-2a1d-453b-a995-2e6cd5cc7120" Sep 5 23:58:29.168140 containerd[1480]: time="2025-09-05T23:58:29.168086800Z" level=error msg="Failed to destroy network for sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.168679 containerd[1480]: time="2025-09-05T23:58:29.168649468Z" level=error msg="encountered an error cleaning up failed sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.168875 containerd[1480]: time="2025-09-05T23:58:29.168848824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb67bb5b6-b9xqc,Uid:62b3a3df-9267-4dc2-9423-4fc82ad97b42,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.169338 kubelet[2614]: E0905 23:58:29.169195 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.169338 kubelet[2614]: E0905 23:58:29.169254 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb67bb5b6-b9xqc" Sep 5 23:58:29.169338 kubelet[2614]: E0905 23:58:29.169292 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb67bb5b6-b9xqc" Sep 5 23:58:29.169554 kubelet[2614]: E0905 23:58:29.169342 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cb67bb5b6-b9xqc_calico-system(62b3a3df-9267-4dc2-9423-4fc82ad97b42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cb67bb5b6-b9xqc_calico-system(62b3a3df-9267-4dc2-9423-4fc82ad97b42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb67bb5b6-b9xqc" podUID="62b3a3df-9267-4dc2-9423-4fc82ad97b42" Sep 5 23:58:29.177849 containerd[1480]: time="2025-09-05T23:58:29.177675006Z" level=error msg="Failed to destroy network for sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.179065 containerd[1480]: time="2025-09-05T23:58:29.178019359Z" level=error msg="encountered an error cleaning up failed sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.179065 containerd[1480]: time="2025-09-05T23:58:29.178071478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648b95987d-x5bx7,Uid:d5bacc15-80ca-43c3-bafd-f08e810b113d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.179787 kubelet[2614]: E0905 23:58:29.179364 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.179787 kubelet[2614]: E0905 23:58:29.179453 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-648b95987d-x5bx7" Sep 5 23:58:29.179787 kubelet[2614]: E0905 23:58:29.179480 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-648b95987d-x5bx7" Sep 5 23:58:29.180641 kubelet[2614]: E0905 23:58:29.179529 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-648b95987d-x5bx7_calico-apiserver(d5bacc15-80ca-43c3-bafd-f08e810b113d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-648b95987d-x5bx7_calico-apiserver(d5bacc15-80ca-43c3-bafd-f08e810b113d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-648b95987d-x5bx7" podUID="d5bacc15-80ca-43c3-bafd-f08e810b113d" Sep 5 23:58:29.183596 containerd[1480]: time="2025-09-05T23:58:29.183550248Z" level=error msg="StopPodSandbox for \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\" failed" error="failed to destroy network for sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 23:58:29.184632 kubelet[2614]: E0905 23:58:29.184541 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Sep 5 23:58:29.184816 kubelet[2614]: E0905 23:58:29.184761 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3"} Sep 5 23:58:29.184931 kubelet[2614]: E0905 23:58:29.184898 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0cbb4f78-12a8-4979-8eea-c1da54e61850\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:58:29.185048 kubelet[2614]: E0905 23:58:29.185028 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0cbb4f78-12a8-4979-8eea-c1da54e61850\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66b8b66984-qdr7l" podUID="0cbb4f78-12a8-4979-8eea-c1da54e61850" Sep 5 23:58:29.189397 containerd[1480]: time="2025-09-05T23:58:29.189343371Z" level=error msg="Failed to destroy network for sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.189918 containerd[1480]: time="2025-09-05T23:58:29.189871441Z" level=error msg="encountered an error cleaning up failed sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.190053 containerd[1480]: time="2025-09-05T23:58:29.190019518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tcpm,Uid:b841a2a5-b2e2-4dd3-a133-b08f780b324f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.190474 kubelet[2614]: E0905 23:58:29.190403 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.192064 kubelet[2614]: E0905 23:58:29.191682 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7tcpm" Sep 5 23:58:29.192064 kubelet[2614]: E0905 23:58:29.191726 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7tcpm" Sep 5 23:58:29.192064 kubelet[2614]: E0905 23:58:29.191779 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7tcpm_calico-system(b841a2a5-b2e2-4dd3-a133-b08f780b324f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7tcpm_calico-system(b841a2a5-b2e2-4dd3-a133-b08f780b324f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7tcpm" podUID="b841a2a5-b2e2-4dd3-a133-b08f780b324f" Sep 5 23:58:29.194540 containerd[1480]: time="2025-09-05T23:58:29.194486107Z" level=error msg="Failed to destroy network for sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.195086 containerd[1480]: time="2025-09-05T23:58:29.195052496Z" level=error msg="encountered an error cleaning up failed sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.195209 containerd[1480]: time="2025-09-05T23:58:29.195187093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648b95987d-pmqf8,Uid:cbe0b2e4-eb4a-4cc8-acdc-005b19facc59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.196218 kubelet[2614]: E0905 23:58:29.196178 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:29.196362 kubelet[2614]: E0905 23:58:29.196344 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-648b95987d-pmqf8" Sep 5 23:58:29.196500 kubelet[2614]: E0905 23:58:29.196479 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-648b95987d-pmqf8" Sep 5 23:58:29.196673 kubelet[2614]: E0905 23:58:29.196623 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-648b95987d-pmqf8_calico-apiserver(cbe0b2e4-eb4a-4cc8-acdc-005b19facc59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-648b95987d-pmqf8_calico-apiserver(cbe0b2e4-eb4a-4cc8-acdc-005b19facc59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-648b95987d-pmqf8" podUID="cbe0b2e4-eb4a-4cc8-acdc-005b19facc59" Sep 5 23:58:30.065500 kubelet[2614]: I0905 23:58:30.065454 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Sep 5 23:58:30.067533 kubelet[2614]: I0905 23:58:30.067491 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Sep 5 23:58:30.068493 containerd[1480]: time="2025-09-05T23:58:30.068393266Z" level=info msg="StopPodSandbox for \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\"" Sep 5 23:58:30.068767 containerd[1480]: time="2025-09-05T23:58:30.068684620Z" level=info msg="Ensure that sandbox 2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75 in task-service has been cleanup successfully" Sep 5 23:58:30.072473 containerd[1480]: time="2025-09-05T23:58:30.072343268Z" level=info msg="StopPodSandbox for \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\"" Sep 5 23:58:30.072764 containerd[1480]: time="2025-09-05T23:58:30.072663621Z" level=info msg="Ensure that sandbox 52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c in task-service has been cleanup successfully" Sep 5 23:58:30.074257 kubelet[2614]: I0905 23:58:30.074049 2614 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:30.084084 containerd[1480]: time="2025-09-05T23:58:30.081966316Z" level=info msg="StopPodSandbox for \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\"" Sep 5 23:58:30.084084 containerd[1480]: time="2025-09-05T23:58:30.082312229Z" level=info msg="Ensure that sandbox a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471 in task-service has been cleanup successfully" Sep 5 23:58:30.085940 kubelet[2614]: I0905 23:58:30.085858 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:58:30.088064 containerd[1480]: time="2025-09-05T23:58:30.086869099Z" level=info msg="StopPodSandbox for \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\"" Sep 5 23:58:30.088064 containerd[1480]: time="2025-09-05T23:58:30.087129214Z" level=info msg="Ensure that sandbox 2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd in task-service has been cleanup successfully" Sep 5 23:58:30.097603 kubelet[2614]: I0905 23:58:30.097571 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:58:30.099358 containerd[1480]: time="2025-09-05T23:58:30.099307051Z" level=info msg="StopPodSandbox for \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\"" Sep 5 23:58:30.099853 containerd[1480]: time="2025-09-05T23:58:30.099832441Z" level=info msg="Ensure that sandbox 1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae in task-service has been cleanup successfully" Sep 5 23:58:30.101981 kubelet[2614]: I0905 23:58:30.101955 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Sep 5 23:58:30.102949 containerd[1480]: time="2025-09-05T23:58:30.102903700Z" level=info msg="StopPodSandbox for \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\"" Sep 5 23:58:30.103135 containerd[1480]: time="2025-09-05T23:58:30.103113056Z" level=info msg="Ensure that sandbox 7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76 in task-service has been cleanup successfully" Sep 5 23:58:30.111847 kubelet[2614]: I0905 23:58:30.111814 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:58:30.114920 containerd[1480]: time="2025-09-05T23:58:30.114488590Z" level=info msg="StopPodSandbox for \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\"" Sep 5 23:58:30.116333 containerd[1480]: time="2025-09-05T23:58:30.115672486Z" level=info msg="Ensure that sandbox 88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727 in task-service has been cleanup successfully" Sep 5 23:58:30.124129 kubelet[2614]: I0905 23:58:30.124100 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Sep 5 23:58:30.126264 containerd[1480]: time="2025-09-05T23:58:30.125749926Z" level=info msg="StopPodSandbox for \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\"" Sep 5 23:58:30.127743 containerd[1480]: time="2025-09-05T23:58:30.127366773Z" level=info msg="Ensure that sandbox 
b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555 in task-service has been cleanup successfully" Sep 5 23:58:30.196442 containerd[1480]: time="2025-09-05T23:58:30.196379321Z" level=error msg="StopPodSandbox for \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\" failed" error="failed to destroy network for sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:30.197018 kubelet[2614]: E0905 23:58:30.196805 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Sep 5 23:58:30.197018 kubelet[2614]: E0905 23:58:30.196853 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c"} Sep 5 23:58:30.197018 kubelet[2614]: E0905 23:58:30.196931 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b841a2a5-b2e2-4dd3-a133-b08f780b324f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:58:30.197018 kubelet[2614]: E0905 23:58:30.196956 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b841a2a5-b2e2-4dd3-a133-b08f780b324f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7tcpm" podUID="b841a2a5-b2e2-4dd3-a133-b08f780b324f" Sep 5 23:58:30.208133 containerd[1480]: time="2025-09-05T23:58:30.208084128Z" level=error msg="StopPodSandbox for \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\" failed" error="failed to destroy network for sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:30.208748 kubelet[2614]: E0905 23:58:30.208596 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:58:30.208748 kubelet[2614]: E0905 23:58:30.208645 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd"} Sep 5 23:58:30.208748 kubelet[2614]: E0905 23:58:30.208686 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5bacc15-80ca-43c3-bafd-f08e810b113d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:58:30.208748 kubelet[2614]: E0905 23:58:30.208708 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5bacc15-80ca-43c3-bafd-f08e810b113d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-648b95987d-x5bx7" podUID="d5bacc15-80ca-43c3-bafd-f08e810b113d" Sep 5 23:58:30.211009 containerd[1480]: time="2025-09-05T23:58:30.210851033Z" level=error msg="StopPodSandbox for \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\" failed" error="failed to destroy network for sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:30.211227 containerd[1480]: time="2025-09-05T23:58:30.210989390Z" level=error msg="StopPodSandbox for \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\" failed" error="failed to destroy network for sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:30.211763 kubelet[2614]: E0905 23:58:30.211532 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:30.211763 kubelet[2614]: E0905 23:58:30.211583 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471"} Sep 5 23:58:30.211763 kubelet[2614]: E0905 23:58:30.211616 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c543c41c-ee3f-499e-8d6b-b62b005decb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:58:30.211763 kubelet[2614]: E0905 23:58:30.211637 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c543c41c-ee3f-499e-8d6b-b62b005decb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8kktj" podUID="c543c41c-ee3f-499e-8d6b-b62b005decb4" Sep 5 23:58:30.212034 kubelet[2614]: E0905 23:58:30.211671 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Sep 5 23:58:30.212034 kubelet[2614]: E0905 23:58:30.211685 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75"} Sep 5 23:58:30.212034 kubelet[2614]: E0905 23:58:30.211699 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:58:30.212034 kubelet[2614]: E0905 23:58:30.211724 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-648b95987d-pmqf8" podUID="cbe0b2e4-eb4a-4cc8-acdc-005b19facc59" Sep 5 23:58:30.221639 containerd[1480]: time="2025-09-05T23:58:30.221573860Z" level=error msg="StopPodSandbox for \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\" failed" error="failed to destroy network for sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:30.222310 kubelet[2614]: E0905 23:58:30.221944 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:58:30.222310 kubelet[2614]: E0905 23:58:30.222016 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae"} Sep 5 23:58:30.222310 kubelet[2614]: E0905 23:58:30.222048 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84038953-2a1d-453b-a995-2e6cd5cc7120\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:58:30.222310 kubelet[2614]: E0905 23:58:30.222069 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84038953-2a1d-453b-a995-2e6cd5cc7120\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-685cf96569-gzvsl" podUID="84038953-2a1d-453b-a995-2e6cd5cc7120" Sep 5 23:58:30.223821 containerd[1480]: time="2025-09-05T23:58:30.223765216Z" level=error msg="StopPodSandbox for \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\" failed" error="failed to destroy network for sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:30.224391 kubelet[2614]: E0905 23:58:30.224184 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Sep 5 23:58:30.224391 kubelet[2614]: E0905 23:58:30.224292 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76"} Sep 5 23:58:30.224391 kubelet[2614]: E0905 23:58:30.224324 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fc1e0fa4-e565-4b6d-a320-3ab660954c63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Sep 5 23:58:30.224391 kubelet[2614]: E0905 23:58:30.224365 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fc1e0fa4-e565-4b6d-a320-3ab660954c63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kgz7b" podUID="fc1e0fa4-e565-4b6d-a320-3ab660954c63" Sep 5 23:58:30.228546 containerd[1480]: time="2025-09-05T23:58:30.228486802Z" level=error msg="StopPodSandbox for \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\" failed" error="failed to destroy network for sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:58:30.229244 kubelet[2614]: E0905 23:58:30.229081 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:58:30.229244 kubelet[2614]: E0905 23:58:30.229138 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727"} Sep 5 23:58:30.229244 kubelet[2614]: E0905 23:58:30.229169 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79a2c532-c00e-4e75-a9f5-90ed8209a139\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:58:30.229244 kubelet[2614]: E0905 23:58:30.229190 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79a2c532-c00e-4e75-a9f5-90ed8209a139\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-t7j5b" podUID="79a2c532-c00e-4e75-a9f5-90ed8209a139" Sep 5 23:58:30.232818 containerd[1480]: time="2025-09-05T23:58:30.232767837Z" level=error msg="StopPodSandbox for \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\" failed" error="failed to destroy network for sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 5 23:58:30.233400 kubelet[2614]: E0905 23:58:30.233239 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Sep 5 23:58:30.233400 kubelet[2614]: E0905 23:58:30.233298 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555"} Sep 5 23:58:30.233400 kubelet[2614]: E0905 23:58:30.233340 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"62b3a3df-9267-4dc2-9423-4fc82ad97b42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:58:30.233400 kubelet[2614]: E0905 23:58:30.233361 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"62b3a3df-9267-4dc2-9423-4fc82ad97b42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb67bb5b6-b9xqc" podUID="62b3a3df-9267-4dc2-9423-4fc82ad97b42" Sep 5 23:58:35.820680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3587146415.mount: Deactivated successfully. 
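Every failed ADD and DEL in the 23:58:29-23:58:30 window above trips over the same missing file: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file the calico/node container writes only once it is running, so kubelet's sandbox creates and teardowns keep failing and being retried until calico-node comes up (the image pull that unblocks it completes at 23:58:35 below). A minimal Go sketch of that gate, assuming only the path and error text visible in the log; this is illustrative, not the plugin's actual code:

package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path the calico/node container populates at startup;
// the CNI plugin consults it on every ADD/DEL (path taken from the log above).
const nodenameFile = "/var/lib/calico/nodename"

// loadNodename mirrors the check that produces the repeated error in the log:
// until calico/node has written the file, every sandbox operation fails.
func loadNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, os.ErrNotExist) {
		return "", fmt.Errorf("stat %s: no such file or directory: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/",
			nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := loadNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}

Once calico-node has written the file, the same ADD calls start succeeding, which is exactly the transition visible between 23:58:30 and 23:58:36-37 in this log.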
Sep 5 23:58:35.853339 containerd[1480]: time="2025-09-05T23:58:35.853262532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:35.854851 containerd[1480]: time="2025-09-05T23:58:35.854620226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 23:58:35.855956 containerd[1480]: time="2025-09-05T23:58:35.855736926Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:35.858501 containerd[1480]: time="2025-09-05T23:58:35.858460954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:35.859267 containerd[1480]: time="2025-09-05T23:58:35.859222820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.801967785s" Sep 5 23:58:35.859267 containerd[1480]: time="2025-09-05T23:58:35.859265019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 23:58:35.881496 containerd[1480]: time="2025-09-05T23:58:35.881312486Z" level=info msg="CreateContainer within sandbox \"de927eb8ca2216a6ff4f354b502fe6b3eccc0866d1a19df538380507aa373388\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 23:58:35.901505 containerd[1480]: time="2025-09-05T23:58:35.901400989Z" level=info msg="CreateContainer within sandbox \"de927eb8ca2216a6ff4f354b502fe6b3eccc0866d1a19df538380507aa373388\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5d47bc6c50b3798a0fdbc119fe0745cc0a199c3f9d94e5636cb790c39cdaedac\"" Sep 5 23:58:35.902763 containerd[1480]: time="2025-09-05T23:58:35.902678525Z" level=info msg="StartContainer for \"5d47bc6c50b3798a0fdbc119fe0745cc0a199c3f9d94e5636cb790c39cdaedac\"" Sep 5 23:58:35.937682 systemd[1]: Started cri-containerd-5d47bc6c50b3798a0fdbc119fe0745cc0a199c3f9d94e5636cb790c39cdaedac.scope - libcontainer container 5d47bc6c50b3798a0fdbc119fe0745cc0a199c3f9d94e5636cb790c39cdaedac. Sep 5 23:58:35.981376 containerd[1480]: time="2025-09-05T23:58:35.981276292Z" level=info msg="StartContainer for \"5d47bc6c50b3798a0fdbc119fe0745cc0a199c3f9d94e5636cb790c39cdaedac\" returns successfully" Sep 5 23:58:36.130704 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 23:58:36.130844 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
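For scale: the pull above moved 151,100,319 bytes in 6.801967785s, roughly 21 MiB/s. Kubelet drives this pull-then-start sequence over the CRI API (the log.go and kuberuntime entries in this log are that RPC surface), but the equivalent flow can be sketched directly against the containerd Go client. The socket path, namespace, and container ID below are assumptions for illustration, not taken from this host:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd socket; kubelet's CRI traffic lives in
	// the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack; this loosely corresponds to the ImageCreate and
	// PullImage events recorded above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer followed by StartContainer, collapsed into the
	// client helpers (a demo container, not the kubelet-managed one).
	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}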
Sep 5 23:58:36.178445 kubelet[2614]: I0905 23:58:36.178193 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p7bdn" podStartSLOduration=1.676845878 podStartE2EDuration="18.178170113s" podCreationTimestamp="2025-09-05 23:58:18 +0000 UTC" firstStartedPulling="2025-09-05 23:58:19.359505915 +0000 UTC m=+21.598658304" lastFinishedPulling="2025-09-05 23:58:35.86083015 +0000 UTC m=+38.099982539" observedRunningTime="2025-09-05 23:58:36.174600859 +0000 UTC m=+38.413753248" watchObservedRunningTime="2025-09-05 23:58:36.178170113 +0000 UTC m=+38.417322502" Sep 5 23:58:36.296613 containerd[1480]: time="2025-09-05T23:58:36.296535276Z" level=info msg="StopPodSandbox for \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\"" Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.423 [INFO][3890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.423 [INFO][3890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" iface="eth0" netns="/var/run/netns/cni-738fdddf-8ccd-82ee-4772-4e95814928d4" Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.423 [INFO][3890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" iface="eth0" netns="/var/run/netns/cni-738fdddf-8ccd-82ee-4772-4e95814928d4" Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.425 [INFO][3890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" iface="eth0" netns="/var/run/netns/cni-738fdddf-8ccd-82ee-4772-4e95814928d4" Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.425 [INFO][3890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.425 [INFO][3890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.476 [INFO][3898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" HandleID="k8s-pod-network.e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0" Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.476 [INFO][3898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.476 [INFO][3898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.488 [WARNING][3898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" HandleID="k8s-pod-network.e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0" Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.488 [INFO][3898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" HandleID="k8s-pod-network.e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0" Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.491 [INFO][3898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:36.498388 containerd[1480]: 2025-09-05 23:58:36.496 [INFO][3890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Sep 5 23:58:36.499285 containerd[1480]: time="2025-09-05T23:58:36.498612525Z" level=info msg="TearDown network for sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\" successfully" Sep 5 23:58:36.499285 containerd[1480]: time="2025-09-05T23:58:36.498651724Z" level=info msg="StopPodSandbox for \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\" returns successfully" Sep 5 23:58:36.621694 kubelet[2614]: I0905 23:58:36.620873 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cbb4f78-12a8-4979-8eea-c1da54e61850-whisker-ca-bundle\") pod \"0cbb4f78-12a8-4979-8eea-c1da54e61850\" (UID: \"0cbb4f78-12a8-4979-8eea-c1da54e61850\") " Sep 5 23:58:36.621694 kubelet[2614]: I0905 23:58:36.621754 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldb92\" (UniqueName: \"kubernetes.io/projected/0cbb4f78-12a8-4979-8eea-c1da54e61850-kube-api-access-ldb92\") pod \"0cbb4f78-12a8-4979-8eea-c1da54e61850\" (UID: \"0cbb4f78-12a8-4979-8eea-c1da54e61850\") " Sep 5 23:58:36.621694 kubelet[2614]: I0905 23:58:36.621814 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0cbb4f78-12a8-4979-8eea-c1da54e61850-whisker-backend-key-pair\") pod \"0cbb4f78-12a8-4979-8eea-c1da54e61850\" (UID: \"0cbb4f78-12a8-4979-8eea-c1da54e61850\") " Sep 5 23:58:36.621694 kubelet[2614]: I0905 23:58:36.621319 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cbb4f78-12a8-4979-8eea-c1da54e61850-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0cbb4f78-12a8-4979-8eea-c1da54e61850" (UID: "0cbb4f78-12a8-4979-8eea-c1da54e61850"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 23:58:36.629654 kubelet[2614]: I0905 23:58:36.629595 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cbb4f78-12a8-4979-8eea-c1da54e61850-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0cbb4f78-12a8-4979-8eea-c1da54e61850" (UID: "0cbb4f78-12a8-4979-8eea-c1da54e61850"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 23:58:36.630944 kubelet[2614]: I0905 23:58:36.630895 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cbb4f78-12a8-4979-8eea-c1da54e61850-kube-api-access-ldb92" (OuterVolumeSpecName: "kube-api-access-ldb92") pod "0cbb4f78-12a8-4979-8eea-c1da54e61850" (UID: "0cbb4f78-12a8-4979-8eea-c1da54e61850"). InnerVolumeSpecName "kube-api-access-ldb92". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 23:58:36.723509 kubelet[2614]: I0905 23:58:36.723405 2614 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cbb4f78-12a8-4979-8eea-c1da54e61850-whisker-ca-bundle\") on node \"ci-4081-3-5-n-4ef3874a70\" DevicePath \"\"" Sep 5 23:58:36.723509 kubelet[2614]: I0905 23:58:36.723500 2614 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ldb92\" (UniqueName: \"kubernetes.io/projected/0cbb4f78-12a8-4979-8eea-c1da54e61850-kube-api-access-ldb92\") on node \"ci-4081-3-5-n-4ef3874a70\" DevicePath \"\"" Sep 5 23:58:36.723780 kubelet[2614]: I0905 23:58:36.723526 2614 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0cbb4f78-12a8-4979-8eea-c1da54e61850-whisker-backend-key-pair\") on node \"ci-4081-3-5-n-4ef3874a70\" DevicePath \"\"" Sep 5 23:58:36.821103 systemd[1]: run-netns-cni\x2d738fdddf\x2d8ccd\x2d82ee\x2d4772\x2d4e95814928d4.mount: Deactivated successfully. Sep 5 23:58:36.821266 systemd[1]: var-lib-kubelet-pods-0cbb4f78\x2d12a8\x2d4979\x2d8eea\x2dc1da54e61850-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dldb92.mount: Deactivated successfully. Sep 5 23:58:36.821371 systemd[1]: var-lib-kubelet-pods-0cbb4f78\x2d12a8\x2d4979\x2d8eea\x2dc1da54e61850-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 23:58:37.153524 kubelet[2614]: I0905 23:58:37.152818 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:58:37.157779 systemd[1]: Removed slice kubepods-besteffort-pod0cbb4f78_12a8_4979_8eea_c1da54e61850.slice - libcontainer container kubepods-besteffort-pod0cbb4f78_12a8_4979_8eea_c1da54e61850.slice. Sep 5 23:58:37.256286 systemd[1]: Created slice kubepods-besteffort-pod8d12431a_1317_4e03_8be6_f9906b841ac6.slice - libcontainer container kubepods-besteffort-pod8d12431a_1317_4e03_8be6_f9906b841ac6.slice. 
Sep 5 23:58:37.328705 kubelet[2614]: I0905 23:58:37.328617 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8d12431a-1317-4e03-8be6-f9906b841ac6-whisker-backend-key-pair\") pod \"whisker-97dbd5c9f-6p7w6\" (UID: \"8d12431a-1317-4e03-8be6-f9906b841ac6\") " pod="calico-system/whisker-97dbd5c9f-6p7w6" Sep 5 23:58:37.328705 kubelet[2614]: I0905 23:58:37.328726 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d12431a-1317-4e03-8be6-f9906b841ac6-whisker-ca-bundle\") pod \"whisker-97dbd5c9f-6p7w6\" (UID: \"8d12431a-1317-4e03-8be6-f9906b841ac6\") " pod="calico-system/whisker-97dbd5c9f-6p7w6" Sep 5 23:58:37.329545 kubelet[2614]: I0905 23:58:37.328808 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfrmv\" (UniqueName: \"kubernetes.io/projected/8d12431a-1317-4e03-8be6-f9906b841ac6-kube-api-access-gfrmv\") pod \"whisker-97dbd5c9f-6p7w6\" (UID: \"8d12431a-1317-4e03-8be6-f9906b841ac6\") " pod="calico-system/whisker-97dbd5c9f-6p7w6" Sep 5 23:58:37.564370 containerd[1480]: time="2025-09-05T23:58:37.563761450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-97dbd5c9f-6p7w6,Uid:8d12431a-1317-4e03-8be6-f9906b841ac6,Namespace:calico-system,Attempt:0,}" Sep 5 23:58:37.731450 systemd-networkd[1376]: cali69221038243: Link UP Sep 5 23:58:37.733146 systemd-networkd[1376]: cali69221038243: Gained carrier Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.604 [INFO][3919] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.623 [INFO][3919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0 whisker-97dbd5c9f- calico-system 8d12431a-1317-4e03-8be6-f9906b841ac6 959 0 2025-09-05 23:58:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:97dbd5c9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-n-4ef3874a70 whisker-97dbd5c9f-6p7w6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali69221038243 [] [] }} ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" Namespace="calico-system" Pod="whisker-97dbd5c9f-6p7w6" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.624 [INFO][3919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" Namespace="calico-system" Pod="whisker-97dbd5c9f-6p7w6" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.652 [INFO][3932] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" HandleID="k8s-pod-network.4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.652 [INFO][3932] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" HandleID="k8s-pod-network.4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-4ef3874a70", "pod":"whisker-97dbd5c9f-6p7w6", "timestamp":"2025-09-05 23:58:37.652537258 +0000 UTC"}, Hostname:"ci-4081-3-5-n-4ef3874a70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.652 [INFO][3932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.652 [INFO][3932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.652 [INFO][3932] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-4ef3874a70' Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.669 [INFO][3932] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.677 [INFO][3932] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.688 [INFO][3932] ipam/ipam.go 511: Trying affinity for 192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.694 [INFO][3932] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.697 [INFO][3932] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.697 [INFO][3932] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.0/26 handle="k8s-pod-network.4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.700 [INFO][3932] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9 Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.706 [INFO][3932] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.0/26 handle="k8s-pod-network.4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.714 [INFO][3932] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.1/26] block=192.168.102.0/26 handle="k8s-pod-network.4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.714 [INFO][3932] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.1/26] handle="k8s-pod-network.4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.714 [INFO][3932] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Sep 5 23:58:37.771816 containerd[1480]: 2025-09-05 23:58:37.714 [INFO][3932] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.1/26] IPv6=[] ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" HandleID="k8s-pod-network.4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0" Sep 5 23:58:37.773529 containerd[1480]: 2025-09-05 23:58:37.720 [INFO][3919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" Namespace="calico-system" Pod="whisker-97dbd5c9f-6p7w6" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0", GenerateName:"whisker-97dbd5c9f-", Namespace:"calico-system", SelfLink:"", UID:"8d12431a-1317-4e03-8be6-f9906b841ac6", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"97dbd5c9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"", Pod:"whisker-97dbd5c9f-6p7w6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.102.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali69221038243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:37.773529 containerd[1480]: 2025-09-05 23:58:37.720 [INFO][3919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.1/32] ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" Namespace="calico-system" Pod="whisker-97dbd5c9f-6p7w6" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0" Sep 5 23:58:37.773529 containerd[1480]: 2025-09-05 23:58:37.720 [INFO][3919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69221038243 ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" Namespace="calico-system" Pod="whisker-97dbd5c9f-6p7w6" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0" Sep 5 23:58:37.773529 containerd[1480]: 2025-09-05 23:58:37.740 [INFO][3919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" Namespace="calico-system" Pod="whisker-97dbd5c9f-6p7w6" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0" Sep 5 23:58:37.773529 containerd[1480]: 2025-09-05 23:58:37.743 [INFO][3919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" 
Namespace="calico-system" Pod="whisker-97dbd5c9f-6p7w6" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0", GenerateName:"whisker-97dbd5c9f-", Namespace:"calico-system", SelfLink:"", UID:"8d12431a-1317-4e03-8be6-f9906b841ac6", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"97dbd5c9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9", Pod:"whisker-97dbd5c9f-6p7w6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.102.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali69221038243", MAC:"fe:6d:82:93:4c:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:37.773529 containerd[1480]: 2025-09-05 23:58:37.764 [INFO][3919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9" Namespace="calico-system" Pod="whisker-97dbd5c9f-6p7w6" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-whisker--97dbd5c9f--6p7w6-eth0" Sep 5 23:58:37.813962 containerd[1480]: time="2025-09-05T23:58:37.813639656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:37.813962 containerd[1480]: time="2025-09-05T23:58:37.813747614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:37.813962 containerd[1480]: time="2025-09-05T23:58:37.813758973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:37.814572 containerd[1480]: time="2025-09-05T23:58:37.814495880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:37.860517 systemd[1]: Started cri-containerd-4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9.scope - libcontainer container 4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9. 
Sep 5 23:58:37.924070 kubelet[2614]: I0905 23:58:37.924015 2614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cbb4f78-12a8-4979-8eea-c1da54e61850" path="/var/lib/kubelet/pods/0cbb4f78-12a8-4979-8eea-c1da54e61850/volumes" Sep 5 23:58:37.952181 containerd[1480]: time="2025-09-05T23:58:37.952124949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-97dbd5c9f-6p7w6,Uid:8d12431a-1317-4e03-8be6-f9906b841ac6,Namespace:calico-system,Attempt:0,} returns sandbox id \"4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9\"" Sep 5 23:58:37.957783 containerd[1480]: time="2025-09-05T23:58:37.957622368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 23:58:38.260361 kubelet[2614]: I0905 23:58:38.259852 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:58:38.988833 systemd-networkd[1376]: cali69221038243: Gained IPv6LL Sep 5 23:58:39.438452 kernel: bpftool[4133]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 23:58:39.629086 systemd-networkd[1376]: vxlan.calico: Link UP Sep 5 23:58:39.629709 systemd-networkd[1376]: vxlan.calico: Gained carrier Sep 5 23:58:40.584766 containerd[1480]: time="2025-09-05T23:58:40.583858987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:40.584766 containerd[1480]: time="2025-09-05T23:58:40.584718772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 23:58:40.585463 containerd[1480]: time="2025-09-05T23:58:40.585438839Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:40.588084 containerd[1480]: time="2025-09-05T23:58:40.588049712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:40.589005 containerd[1480]: time="2025-09-05T23:58:40.588923616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 2.631258289s" Sep 5 23:58:40.589125 containerd[1480]: time="2025-09-05T23:58:40.589107053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 23:58:40.594813 containerd[1480]: time="2025-09-05T23:58:40.594769591Z" level=info msg="CreateContainer within sandbox \"4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 23:58:40.615092 containerd[1480]: time="2025-09-05T23:58:40.615036308Z" level=info msg="CreateContainer within sandbox \"4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1d670ef09c3d14b36f51d46a149475067afff2f86162ccdec14f4a56724544ec\"" Sep 5 23:58:40.617327 containerd[1480]: time="2025-09-05T23:58:40.617270748Z" level=info msg="StartContainer for 
\"1d670ef09c3d14b36f51d46a149475067afff2f86162ccdec14f4a56724544ec\"" Sep 5 23:58:40.664899 systemd[1]: Started cri-containerd-1d670ef09c3d14b36f51d46a149475067afff2f86162ccdec14f4a56724544ec.scope - libcontainer container 1d670ef09c3d14b36f51d46a149475067afff2f86162ccdec14f4a56724544ec. Sep 5 23:58:40.716347 containerd[1480]: time="2025-09-05T23:58:40.716266893Z" level=info msg="StartContainer for \"1d670ef09c3d14b36f51d46a149475067afff2f86162ccdec14f4a56724544ec\" returns successfully" Sep 5 23:58:40.720141 containerd[1480]: time="2025-09-05T23:58:40.720104065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 23:58:41.070353 kubelet[2614]: I0905 23:58:41.070046 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:58:41.292600 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Sep 5 23:58:41.607301 systemd[1]: run-containerd-runc-k8s.io-5d47bc6c50b3798a0fdbc119fe0745cc0a199c3f9d94e5636cb790c39cdaedac-runc.iXv5uz.mount: Deactivated successfully. Sep 5 23:58:41.908912 containerd[1480]: time="2025-09-05T23:58:41.907270287Z" level=info msg="StopPodSandbox for \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\"" Sep 5 23:58:41.908912 containerd[1480]: time="2025-09-05T23:58:41.907901007Z" level=info msg="StopPodSandbox for \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\"" Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:41.976 [INFO][4308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:41.977 [INFO][4308] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" iface="eth0" netns="/var/run/netns/cni-5d6cad0b-0ec9-af3f-164d-2d4fe0a60eac" Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:41.977 [INFO][4308] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" iface="eth0" netns="/var/run/netns/cni-5d6cad0b-0ec9-af3f-164d-2d4fe0a60eac" Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:41.978 [INFO][4308] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" iface="eth0" netns="/var/run/netns/cni-5d6cad0b-0ec9-af3f-164d-2d4fe0a60eac" Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:41.978 [INFO][4308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:41.978 [INFO][4308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:42.009 [INFO][4321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" HandleID="k8s-pod-network.a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:42.011 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:42.011 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:42.023 [WARNING][4321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" HandleID="k8s-pod-network.a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:42.023 [INFO][4321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" HandleID="k8s-pod-network.a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:42.025 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:42.032496 containerd[1480]: 2025-09-05 23:58:42.026 [INFO][4308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:42.032496 containerd[1480]: time="2025-09-05T23:58:42.030570867Z" level=info msg="TearDown network for sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\" successfully" Sep 5 23:58:42.032496 containerd[1480]: time="2025-09-05T23:58:42.030598871Z" level=info msg="StopPodSandbox for \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\" returns successfully" Sep 5 23:58:42.038503 systemd[1]: run-netns-cni\x2d5d6cad0b\x2d0ec9\x2daf3f\x2d164d\x2d2d4fe0a60eac.mount: Deactivated successfully. Sep 5 23:58:42.052214 containerd[1480]: time="2025-09-05T23:58:42.052161069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kktj,Uid:c543c41c-ee3f-499e-8d6b-b62b005decb4,Namespace:kube-system,Attempt:1,}" Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:41.980 [INFO][4307] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:41.980 [INFO][4307] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" iface="eth0" netns="/var/run/netns/cni-d09f66ab-1e03-eced-9859-0e9267db5354" Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:41.983 [INFO][4307] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" iface="eth0" netns="/var/run/netns/cni-d09f66ab-1e03-eced-9859-0e9267db5354" Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:41.983 [INFO][4307] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" iface="eth0" netns="/var/run/netns/cni-d09f66ab-1e03-eced-9859-0e9267db5354" Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:41.983 [INFO][4307] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:41.983 [INFO][4307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:42.013 [INFO][4326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" HandleID="k8s-pod-network.52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:42.013 [INFO][4326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:42.025 [INFO][4326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:42.043 [WARNING][4326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" HandleID="k8s-pod-network.52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:42.043 [INFO][4326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" HandleID="k8s-pod-network.52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:42.047 [INFO][4326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:42.055311 containerd[1480]: 2025-09-05 23:58:42.051 [INFO][4307] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Sep 5 23:58:42.057688 containerd[1480]: time="2025-09-05T23:58:42.057530126Z" level=info msg="TearDown network for sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\" successfully" Sep 5 23:58:42.057688 containerd[1480]: time="2025-09-05T23:58:42.057572771Z" level=info msg="StopPodSandbox for \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\" returns successfully" Sep 5 23:58:42.059431 containerd[1480]: time="2025-09-05T23:58:42.059154965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tcpm,Uid:b841a2a5-b2e2-4dd3-a133-b08f780b324f,Namespace:calico-system,Attempt:1,}" Sep 5 23:58:42.059704 systemd[1]: run-netns-cni\x2dd09f66ab\x2d1e03\x2deced\x2d9859\x2d0e9267db5354.mount: Deactivated successfully. 
Sep 5 23:58:42.242707 systemd-networkd[1376]: cali256f81effc7: Link UP Sep 5 23:58:42.244034 systemd-networkd[1376]: cali256f81effc7: Gained carrier Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.126 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0 coredns-674b8bbfcf- kube-system c543c41c-ee3f-499e-8d6b-b62b005decb4 987 0 2025-09-05 23:58:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-4ef3874a70 coredns-674b8bbfcf-8kktj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali256f81effc7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kktj" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.126 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kktj" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.172 [INFO][4360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" HandleID="k8s-pod-network.8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.172 [INFO][4360] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" HandleID="k8s-pod-network.8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-4ef3874a70", "pod":"coredns-674b8bbfcf-8kktj", "timestamp":"2025-09-05 23:58:42.172088185 +0000 UTC"}, Hostname:"ci-4081-3-5-n-4ef3874a70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.172 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.172 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
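
Each successful CNI ADD in this log surfaces in systemd-networkd as a new cali* veth gaining carrier (cali69221038243 earlier, cali256f81effc7 here), alongside the vxlan.calico overlay device. A stdlib-only way to enumerate those devices when run on the node itself:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // Lists the Calico-managed devices a node like this one carries: one
    // "cali*" veth per workload endpoint plus the "vxlan.calico" overlay.
    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            if strings.HasPrefix(ifc.Name, "cali") || ifc.Name == "vxlan.calico" {
                fmt.Printf("%-16s mtu=%d flags=%s\n", ifc.Name, ifc.MTU, ifc.Flags)
            }
        }
    }
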
Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.172 [INFO][4360] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-4ef3874a70' Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.188 [INFO][4360] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.195 [INFO][4360] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.202 [INFO][4360] ipam/ipam.go 511: Trying affinity for 192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.205 [INFO][4360] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.209 [INFO][4360] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.209 [INFO][4360] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.0/26 handle="k8s-pod-network.8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.211 [INFO][4360] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7 Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.219 [INFO][4360] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.0/26 handle="k8s-pod-network.8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.228 [INFO][4360] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.2/26] block=192.168.102.0/26 handle="k8s-pod-network.8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.228 [INFO][4360] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.2/26] handle="k8s-pod-network.8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.228 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
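
Every allocation in these traces is keyed by the handle "k8s-pod-network.<sandbox ContainerID>", and each pod so far lands in the same affine /26 block, which spans 2^(32-26) = 64 addresses before the node would need to claim another block. A small sketch of both conventions:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Handle convention seen throughout the trace: a fixed prefix plus
        // the pod sandbox's container ID (here, the coredns sandbox).
        sandboxID := "8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7"
        fmt.Println("k8s-pod-network." + sandboxID)

        // The node's affine block: a /26 holds 2^(32-26) = 64 addresses.
        block := netip.MustParsePrefix("192.168.102.0/26")
        fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64
    }
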
Sep 5 23:58:42.268933 containerd[1480]: 2025-09-05 23:58:42.228 [INFO][4360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.2/26] IPv6=[] ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" HandleID="k8s-pod-network.8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:42.269528 containerd[1480]: 2025-09-05 23:58:42.233 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kktj" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c543c41c-ee3f-499e-8d6b-b62b005decb4", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"", Pod:"coredns-674b8bbfcf-8kktj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali256f81effc7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:42.269528 containerd[1480]: 2025-09-05 23:58:42.233 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.2/32] ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kktj" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:42.269528 containerd[1480]: 2025-09-05 23:58:42.233 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali256f81effc7 ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kktj" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:42.269528 containerd[1480]: 2025-09-05 23:58:42.248 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-8kktj" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:42.269528 containerd[1480]: 2025-09-05 23:58:42.251 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kktj" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c543c41c-ee3f-499e-8d6b-b62b005decb4", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7", Pod:"coredns-674b8bbfcf-8kktj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali256f81effc7", MAC:"e6:96:7a:23:48:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:42.269528 containerd[1480]: 2025-09-05 23:58:42.264 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kktj" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:42.291747 containerd[1480]: time="2025-09-05T23:58:42.291210682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:42.291886 containerd[1480]: time="2025-09-05T23:58:42.291757269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:42.291886 containerd[1480]: time="2025-09-05T23:58:42.291806995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:42.292268 containerd[1480]: time="2025-09-05T23:58:42.291969335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:42.338677 systemd[1]: Started cri-containerd-8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7.scope - libcontainer container 8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7. Sep 5 23:58:42.383014 systemd-networkd[1376]: cali44819cff664: Link UP Sep 5 23:58:42.384530 systemd-networkd[1376]: cali44819cff664: Gained carrier Sep 5 23:58:42.415012 containerd[1480]: time="2025-09-05T23:58:42.400283190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kktj,Uid:c543c41c-ee3f-499e-8d6b-b62b005decb4,Namespace:kube-system,Attempt:1,} returns sandbox id \"8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7\"" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.128 [INFO][4337] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0 csi-node-driver- calico-system b841a2a5-b2e2-4dd3-a133-b08f780b324f 988 0 2025-09-05 23:58:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-n-4ef3874a70 csi-node-driver-7tcpm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali44819cff664 [] [] }} ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Namespace="calico-system" Pod="csi-node-driver-7tcpm" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.128 [INFO][4337] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Namespace="calico-system" Pod="csi-node-driver-7tcpm" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.177 [INFO][4365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" HandleID="k8s-pod-network.ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.178 [INFO][4365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" HandleID="k8s-pod-network.ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330650), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-4ef3874a70", "pod":"csi-node-driver-7tcpm", "timestamp":"2025-09-05 23:58:42.176764877 +0000 UTC"}, Hostname:"ci-4081-3-5-n-4ef3874a70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.178 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
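
The "RunPodSandbox for &PodSandboxMetadata{...} returns sandbox id" entries above are kubelet driving containerd over the CRI gRPC API. A hedged sketch of that call using the published CRI client types, with the metadata fields copied from the coredns entry in this log (socket path assumed to be this host's containerd socket):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "coredns-674b8bbfcf-8kktj",
                    Uid:       "c543c41c-ee3f-499e-8d6b-b62b005decb4",
                    Namespace: "kube-system",
                    Attempt:   1, // retry: the first sandbox was torn down above
                },
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("sandbox id:", resp.PodSandboxId) // 8a479ee0... in the log
    }
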
Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.229 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.229 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-4ef3874a70' Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.290 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.303 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.315 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.320 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.327 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.327 [INFO][4365] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.0/26 handle="k8s-pod-network.ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.340 [INFO][4365] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.353 [INFO][4365] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.0/26 handle="k8s-pod-network.ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.369 [INFO][4365] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.3/26] block=192.168.102.0/26 handle="k8s-pod-network.ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.370 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.3/26] handle="k8s-pod-network.ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.370 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
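
The endpoint dumps print WorkloadEndpointPort values in Go's hex struct notation, so the coredns ports in the dump above read Port:0x35 and Port:0x23c1. Decoded, they are the expected CoreDNS ports:

    package main

    import "fmt"

    func main() {
        // Hex port values as printed in the coredns endpoint dump above.
        ports := map[string]uint16{
            "dns":     0x35,   // 53/UDP
            "dns-tcp": 0x35,   // 53/TCP
            "metrics": 0x23c1, // 9153/TCP, the CoreDNS Prometheus endpoint
        }
        for name, p := range ports {
            fmt.Printf("%-8s %d\n", name, p)
        }
    }
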
Sep 5 23:58:42.418029 containerd[1480]: 2025-09-05 23:58:42.370 [INFO][4365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.3/26] IPv6=[] ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" HandleID="k8s-pod-network.ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" Sep 5 23:58:42.419123 containerd[1480]: 2025-09-05 23:58:42.376 [INFO][4337] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Namespace="calico-system" Pod="csi-node-driver-7tcpm" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b841a2a5-b2e2-4dd3-a133-b08f780b324f", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"", Pod:"csi-node-driver-7tcpm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44819cff664", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:42.419123 containerd[1480]: 2025-09-05 23:58:42.377 [INFO][4337] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.3/32] ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Namespace="calico-system" Pod="csi-node-driver-7tcpm" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" Sep 5 23:58:42.419123 containerd[1480]: 2025-09-05 23:58:42.377 [INFO][4337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44819cff664 ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Namespace="calico-system" Pod="csi-node-driver-7tcpm" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" Sep 5 23:58:42.419123 containerd[1480]: 2025-09-05 23:58:42.384 [INFO][4337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Namespace="calico-system" Pod="csi-node-driver-7tcpm" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" Sep 5 23:58:42.419123 containerd[1480]: 2025-09-05 23:58:42.388 [INFO][4337] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Namespace="calico-system" Pod="csi-node-driver-7tcpm" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b841a2a5-b2e2-4dd3-a133-b08f780b324f", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e", Pod:"csi-node-driver-7tcpm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44819cff664", MAC:"82:d4:70:c5:fe:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:42.419123 containerd[1480]: 2025-09-05 23:58:42.407 [INFO][4337] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e" Namespace="calico-system" Pod="csi-node-driver-7tcpm" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0" Sep 5 23:58:42.427039 containerd[1480]: time="2025-09-05T23:58:42.426372743Z" level=info msg="CreateContainer within sandbox \"8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:58:42.451993 containerd[1480]: time="2025-09-05T23:58:42.451684200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:42.451993 containerd[1480]: time="2025-09-05T23:58:42.451751288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:42.451993 containerd[1480]: time="2025-09-05T23:58:42.451839139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:42.451993 containerd[1480]: time="2025-09-05T23:58:42.451936351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:42.457210 containerd[1480]: time="2025-09-05T23:58:42.457067579Z" level=info msg="CreateContainer within sandbox \"8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d10e83164fcae5437e8d3d859c524af505ed017d5e4f46cce4334ecf89c49a84\"" Sep 5 23:58:42.458223 containerd[1480]: time="2025-09-05T23:58:42.458151671Z" level=info msg="StartContainer for \"d10e83164fcae5437e8d3d859c524af505ed017d5e4f46cce4334ecf89c49a84\"" Sep 5 23:58:42.479657 systemd[1]: Started cri-containerd-ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e.scope - libcontainer container ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e. Sep 5 23:58:42.503678 systemd[1]: Started cri-containerd-d10e83164fcae5437e8d3d859c524af505ed017d5e4f46cce4334ecf89c49a84.scope - libcontainer container d10e83164fcae5437e8d3d859c524af505ed017d5e4f46cce4334ecf89c49a84. Sep 5 23:58:42.540292 containerd[1480]: time="2025-09-05T23:58:42.540237997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tcpm,Uid:b841a2a5-b2e2-4dd3-a133-b08f780b324f,Namespace:calico-system,Attempt:1,} returns sandbox id \"ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e\"" Sep 5 23:58:42.552467 containerd[1480]: time="2025-09-05T23:58:42.552402005Z" level=info msg="StartContainer for \"d10e83164fcae5437e8d3d859c524af505ed017d5e4f46cce4334ecf89c49a84\" returns successfully" Sep 5 23:58:42.910030 containerd[1480]: time="2025-09-05T23:58:42.908899151Z" level=info msg="StopPodSandbox for \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\"" Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.000 [INFO][4527] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.000 [INFO][4527] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" iface="eth0" netns="/var/run/netns/cni-d18fd6c7-433f-9e05-9f2e-c9354775238d" Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.000 [INFO][4527] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" iface="eth0" netns="/var/run/netns/cni-d18fd6c7-433f-9e05-9f2e-c9354775238d" Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.001 [INFO][4527] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" iface="eth0" netns="/var/run/netns/cni-d18fd6c7-433f-9e05-9f2e-c9354775238d" Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.001 [INFO][4527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.001 [INFO][4527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.040 [INFO][4534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" HandleID="k8s-pod-network.b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.041 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.041 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.054 [WARNING][4534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" HandleID="k8s-pod-network.b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.054 [INFO][4534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" HandleID="k8s-pod-network.b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.057 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:43.062582 containerd[1480]: 2025-09-05 23:58:43.059 [INFO][4527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Sep 5 23:58:43.066800 containerd[1480]: time="2025-09-05T23:58:43.062897441Z" level=info msg="TearDown network for sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\" successfully" Sep 5 23:58:43.066800 containerd[1480]: time="2025-09-05T23:58:43.062927004Z" level=info msg="StopPodSandbox for \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\" returns successfully" Sep 5 23:58:43.066800 containerd[1480]: time="2025-09-05T23:58:43.066604520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb67bb5b6-b9xqc,Uid:62b3a3df-9267-4dc2-9423-4fc82ad97b42,Namespace:calico-system,Attempt:1,}" Sep 5 23:58:43.069873 systemd[1]: run-netns-cni\x2dd18fd6c7\x2d433f\x2d9e05\x2d9f2e\x2dc9354775238d.mount: Deactivated successfully. 
Sep 5 23:58:43.230931 kubelet[2614]: I0905 23:58:43.230021 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8kktj" podStartSLOduration=40.230000651 podStartE2EDuration="40.230000651s" podCreationTimestamp="2025-09-05 23:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:58:43.226024979 +0000 UTC m=+45.465177408" watchObservedRunningTime="2025-09-05 23:58:43.230000651 +0000 UTC m=+45.469153040" Sep 5 23:58:43.301808 systemd-networkd[1376]: calif2677de3acf: Link UP Sep 5 23:58:43.309546 systemd-networkd[1376]: calif2677de3acf: Gained carrier Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.136 [INFO][4540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0 calico-kube-controllers-7cb67bb5b6- calico-system 62b3a3df-9267-4dc2-9423-4fc82ad97b42 1001 0 2025-09-05 23:58:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cb67bb5b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-n-4ef3874a70 calico-kube-controllers-7cb67bb5b6-b9xqc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif2677de3acf [] [] }} ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Namespace="calico-system" Pod="calico-kube-controllers-7cb67bb5b6-b9xqc" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.137 [INFO][4540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Namespace="calico-system" Pod="calico-kube-controllers-7cb67bb5b6-b9xqc" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.177 [INFO][4553] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" HandleID="k8s-pod-network.574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.178 [INFO][4553] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" HandleID="k8s-pod-network.574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-4ef3874a70", "pod":"calico-kube-controllers-7cb67bb5b6-b9xqc", "timestamp":"2025-09-05 23:58:43.177819625 +0000 UTC"}, Hostname:"ci-4081-3-5-n-4ef3874a70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:58:43.330715 containerd[1480]: 
2025-09-05 23:58:43.178 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.178 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.178 [INFO][4553] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-4ef3874a70' Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.193 [INFO][4553] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.206 [INFO][4553] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.220 [INFO][4553] ipam/ipam.go 511: Trying affinity for 192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.228 [INFO][4553] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.238 [INFO][4553] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.239 [INFO][4553] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.0/26 handle="k8s-pod-network.574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.248 [INFO][4553] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469 Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.263 [INFO][4553] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.0/26 handle="k8s-pod-network.574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.283 [INFO][4553] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.4/26] block=192.168.102.0/26 handle="k8s-pod-network.574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.284 [INFO][4553] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.4/26] handle="k8s-pod-network.574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.284 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
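
The kubelet pod_startup_latency_tracker line above is straightforward arithmetic: with an empty image-pull window (firstStartedPulling and lastFinishedPulling are both the zero time, so nothing is deducted for pulling), podStartSLOduration reduces to watchObservedRunningTime minus podCreationTimestamp, and 23:58:43.230000651 − 23:58:03 gives exactly the reported 40.230000651s. Checking with stdlib time:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the timestamps printed in the kubelet entry above.
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-09-05 23:58:03 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-09-05 23:58:43.230000651 +0000 UTC")

        // Empty pull window, so the SLO duration is simply observed - created.
        fmt.Println(observed.Sub(created).Seconds()) // 40.230000651
    }
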
Sep 5 23:58:43.330715 containerd[1480]: 2025-09-05 23:58:43.284 [INFO][4553] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.4/26] IPv6=[] ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" HandleID="k8s-pod-network.574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" Sep 5 23:58:43.331914 containerd[1480]: 2025-09-05 23:58:43.292 [INFO][4540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Namespace="calico-system" Pod="calico-kube-controllers-7cb67bb5b6-b9xqc" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0", GenerateName:"calico-kube-controllers-7cb67bb5b6-", Namespace:"calico-system", SelfLink:"", UID:"62b3a3df-9267-4dc2-9423-4fc82ad97b42", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb67bb5b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"", Pod:"calico-kube-controllers-7cb67bb5b6-b9xqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2677de3acf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:43.331914 containerd[1480]: 2025-09-05 23:58:43.292 [INFO][4540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.4/32] ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Namespace="calico-system" Pod="calico-kube-controllers-7cb67bb5b6-b9xqc" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" Sep 5 23:58:43.331914 containerd[1480]: 2025-09-05 23:58:43.293 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2677de3acf ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Namespace="calico-system" Pod="calico-kube-controllers-7cb67bb5b6-b9xqc" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" Sep 5 23:58:43.331914 containerd[1480]: 2025-09-05 23:58:43.309 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Namespace="calico-system" Pod="calico-kube-controllers-7cb67bb5b6-b9xqc" 
WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" Sep 5 23:58:43.331914 containerd[1480]: 2025-09-05 23:58:43.310 [INFO][4540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Namespace="calico-system" Pod="calico-kube-controllers-7cb67bb5b6-b9xqc" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0", GenerateName:"calico-kube-controllers-7cb67bb5b6-", Namespace:"calico-system", SelfLink:"", UID:"62b3a3df-9267-4dc2-9423-4fc82ad97b42", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb67bb5b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469", Pod:"calico-kube-controllers-7cb67bb5b6-b9xqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2677de3acf", MAC:"4e:38:b4:f1:91:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:43.331914 containerd[1480]: 2025-09-05 23:58:43.325 [INFO][4540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469" Namespace="calico-system" Pod="calico-kube-controllers-7cb67bb5b6-b9xqc" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0" Sep 5 23:58:43.375138 containerd[1480]: time="2025-09-05T23:58:43.374584391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:43.376139 containerd[1480]: time="2025-09-05T23:58:43.374639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:43.376139 containerd[1480]: time="2025-09-05T23:58:43.375504460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:43.376139 containerd[1480]: time="2025-09-05T23:58:43.375621514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:43.406789 systemd[1]: Started cri-containerd-574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469.scope - libcontainer container 574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469. Sep 5 23:58:43.465114 containerd[1480]: time="2025-09-05T23:58:43.465032394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb67bb5b6-b9xqc,Uid:62b3a3df-9267-4dc2-9423-4fc82ad97b42,Namespace:calico-system,Attempt:1,} returns sandbox id \"574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469\"" Sep 5 23:58:43.522700 containerd[1480]: time="2025-09-05T23:58:43.521627663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:43.523811 containerd[1480]: time="2025-09-05T23:58:43.523592376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 23:58:43.525280 containerd[1480]: time="2025-09-05T23:58:43.524984501Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:43.528356 containerd[1480]: time="2025-09-05T23:58:43.528312136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:43.529245 containerd[1480]: time="2025-09-05T23:58:43.529201281Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.809057096s" Sep 5 23:58:43.529245 containerd[1480]: time="2025-09-05T23:58:43.529242566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 23:58:43.530558 containerd[1480]: time="2025-09-05T23:58:43.530451069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 23:58:43.535397 containerd[1480]: time="2025-09-05T23:58:43.535107741Z" level=info msg="CreateContainer within sandbox \"4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 23:58:43.550781 containerd[1480]: time="2025-09-05T23:58:43.550689308Z" level=info msg="CreateContainer within sandbox \"4bcfc4be2d9880d541d36cd045b2c5e8f0defa0ed9b330f9eb988ecc2e420ae9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c20f7efdc5e812dff8c43dfef1bc65ffbc214f66e64a09a6749966b69fd3f9fa\"" Sep 5 23:58:43.553732 containerd[1480]: time="2025-09-05T23:58:43.553311219Z" level=info msg="StartContainer for \"c20f7efdc5e812dff8c43dfef1bc65ffbc214f66e64a09a6749966b69fd3f9fa\"" Sep 5 23:58:43.586640 systemd[1]: Started cri-containerd-c20f7efdc5e812dff8c43dfef1bc65ffbc214f66e64a09a6749966b69fd3f9fa.scope - libcontainer container c20f7efdc5e812dff8c43dfef1bc65ffbc214f66e64a09a6749966b69fd3f9fa. 
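The whisker-backend pull logged above reports a size of 30823530 bytes fetched in 2.809057096s, which works out to roughly 10.5 MiB/s from ghcr.io. A quick check of that arithmetic:

package main

import "fmt"

func main() {
	const sizeBytes = 30823530  // size reported by containerd for the pulled image
	const seconds = 2.809057096 // pull duration from the same log line
	mib := float64(sizeBytes) / (1 << 20)
	fmt.Printf("%.1f MiB in %.2fs = %.1f MiB/s\n", mib, seconds, mib/seconds)
	// Output: 29.4 MiB in 2.81s = 10.5 MiB/s
}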
Sep 5 23:58:43.608526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097632571.mount: Deactivated successfully. Sep 5 23:58:43.641806 containerd[1480]: time="2025-09-05T23:58:43.641702378Z" level=info msg="StartContainer for \"c20f7efdc5e812dff8c43dfef1bc65ffbc214f66e64a09a6749966b69fd3f9fa\" returns successfully" Sep 5 23:58:44.109022 systemd-networkd[1376]: cali256f81effc7: Gained IPv6LL Sep 5 23:58:44.302245 systemd-networkd[1376]: cali44819cff664: Gained IPv6LL Sep 5 23:58:44.684983 systemd-networkd[1376]: calif2677de3acf: Gained IPv6LL Sep 5 23:58:44.905902 containerd[1480]: time="2025-09-05T23:58:44.905815486Z" level=info msg="StopPodSandbox for \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\"" Sep 5 23:58:44.907659 containerd[1480]: time="2025-09-05T23:58:44.907619413Z" level=info msg="StopPodSandbox for \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\"" Sep 5 23:58:44.998798 kubelet[2614]: I0905 23:58:44.998610 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-97dbd5c9f-6p7w6" podStartSLOduration=2.425380906 podStartE2EDuration="7.998581899s" podCreationTimestamp="2025-09-05 23:58:37 +0000 UTC" firstStartedPulling="2025-09-05 23:58:37.957099098 +0000 UTC m=+40.196251447" lastFinishedPulling="2025-09-05 23:58:43.530300091 +0000 UTC m=+45.769452440" observedRunningTime="2025-09-05 23:58:44.231919211 +0000 UTC m=+46.471071600" watchObservedRunningTime="2025-09-05 23:58:44.998581899 +0000 UTC m=+47.237734288" Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.003 [INFO][4676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.005 [INFO][4676] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" iface="eth0" netns="/var/run/netns/cni-e1f49c7d-7172-e2b9-0ccf-50bf9fc5ecb5" Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.006 [INFO][4676] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" iface="eth0" netns="/var/run/netns/cni-e1f49c7d-7172-e2b9-0ccf-50bf9fc5ecb5" Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.009 [INFO][4676] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" iface="eth0" netns="/var/run/netns/cni-e1f49c7d-7172-e2b9-0ccf-50bf9fc5ecb5" Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.010 [INFO][4676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.010 [INFO][4676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.068 [INFO][4693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" HandleID="k8s-pod-network.88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.070 [INFO][4693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.070 [INFO][4693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.095 [WARNING][4693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" HandleID="k8s-pod-network.88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.095 [INFO][4693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" HandleID="k8s-pod-network.88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.102 [INFO][4693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:45.110638 containerd[1480]: 2025-09-05 23:58:45.106 [INFO][4676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:58:45.111160 containerd[1480]: time="2025-09-05T23:58:45.110812633Z" level=info msg="TearDown network for sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\" successfully" Sep 5 23:58:45.111160 containerd[1480]: time="2025-09-05T23:58:45.110855917Z" level=info msg="StopPodSandbox for \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\" returns successfully" Sep 5 23:58:45.119535 containerd[1480]: time="2025-09-05T23:58:45.116088180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-t7j5b,Uid:79a2c532-c00e-4e75-a9f5-90ed8209a139,Namespace:calico-system,Attempt:1,}" Sep 5 23:58:45.117312 systemd[1]: run-netns-cni\x2de1f49c7d\x2d7172\x2de2b9\x2d0ccf\x2d50bf9fc5ecb5.mount: Deactivated successfully. Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.045 [INFO][4677] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.046 [INFO][4677] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" iface="eth0" netns="/var/run/netns/cni-f187f46d-f535-2371-d76e-0696c95a22e4" Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.046 [INFO][4677] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" iface="eth0" netns="/var/run/netns/cni-f187f46d-f535-2371-d76e-0696c95a22e4" Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.047 [INFO][4677] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" iface="eth0" netns="/var/run/netns/cni-f187f46d-f535-2371-d76e-0696c95a22e4" Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.047 [INFO][4677] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.047 [INFO][4677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.087 [INFO][4700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" HandleID="k8s-pod-network.2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.091 [INFO][4700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.102 [INFO][4700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.119 [WARNING][4700] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" HandleID="k8s-pod-network.2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.120 [INFO][4700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" HandleID="k8s-pod-network.2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.142 [INFO][4700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:45.157067 containerd[1480]: 2025-09-05 23:58:45.149 [INFO][4677] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Sep 5 23:58:45.164389 containerd[1480]: time="2025-09-05T23:58:45.161553318Z" level=info msg="TearDown network for sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\" successfully" Sep 5 23:58:45.164389 containerd[1480]: time="2025-09-05T23:58:45.161602843Z" level=info msg="StopPodSandbox for \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\" returns successfully" Sep 5 23:58:45.163143 systemd[1]: run-netns-cni\x2df187f46d\x2df535\x2d2371\x2dd76e\x2d0696c95a22e4.mount: Deactivated successfully. 
Sep 5 23:58:45.168309 containerd[1480]: time="2025-09-05T23:58:45.167949230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648b95987d-pmqf8,Uid:cbe0b2e4-eb4a-4cc8-acdc-005b19facc59,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:58:45.440490 systemd-networkd[1376]: cali479b03d2385: Link UP Sep 5 23:58:45.449155 systemd-networkd[1376]: cali479b03d2385: Gained carrier Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.280 [INFO][4709] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0 goldmane-54d579b49d- calico-system 79a2c532-c00e-4e75-a9f5-90ed8209a139 1028 0 2025-09-05 23:58:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-n-4ef3874a70 goldmane-54d579b49d-t7j5b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali479b03d2385 [] [] }} ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" Namespace="calico-system" Pod="goldmane-54d579b49d-t7j5b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.280 [INFO][4709] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" Namespace="calico-system" Pod="goldmane-54d579b49d-t7j5b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.352 [INFO][4732] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" HandleID="k8s-pod-network.0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.352 [INFO][4732] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" HandleID="k8s-pod-network.0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d31c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-4ef3874a70", "pod":"goldmane-54d579b49d-t7j5b", "timestamp":"2025-09-05 23:58:45.352520005 +0000 UTC"}, Hostname:"ci-4081-3-5-n-4ef3874a70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.352 [INFO][4732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.353 [INFO][4732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
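The assignArgs dump above shows the entire request the CNI plugin hands to IPAM as a single struct. The visible fields can be mirrored like this (a paraphrase of what the log prints, not the canonical ipam.AutoAssignArgs type from Calico; the handle is truncated here):

package main

import "fmt"

// autoAssignArgs mirrors only the fields visible in the assignArgs dump.
type autoAssignArgs struct {
	Num4, Num6 int
	HandleID   *string
	Attrs      map[string]string
	Hostname   string
}

func main() {
	handle := "k8s-pod-network.0f5f0491..." // truncated
	req := autoAssignArgs{
		Num4:     1, // "Auto-assign 1 ipv4, 0 ipv6"
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "ci-4081-3-5-n-4ef3874a70",
			"pod":       "goldmane-54d579b49d-t7j5b",
		},
		Hostname: "ci-4081-3-5-n-4ef3874a70",
	}
	fmt.Printf("%d v4 addr(s) for %s via handle %s\n", req.Num4, req.Attrs["pod"], *req.HandleID)
}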
Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.353 [INFO][4732] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-4ef3874a70' Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.373 [INFO][4732] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.383 [INFO][4732] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.390 [INFO][4732] ipam/ipam.go 511: Trying affinity for 192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.394 [INFO][4732] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.398 [INFO][4732] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.398 [INFO][4732] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.0/26 handle="k8s-pod-network.0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.402 [INFO][4732] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.415 [INFO][4732] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.0/26 handle="k8s-pod-network.0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.423 [INFO][4732] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.5/26] block=192.168.102.0/26 handle="k8s-pod-network.0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.423 [INFO][4732] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.5/26] handle="k8s-pod-network.0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.423 [INFO][4732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
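Note the lock hand-off in the timestamps: request [4732] for goldmane holds the host-wide IPAM lock from 23:58:45.353 to 23:58:45.423, while the apiserver request [4738] below, waiting since 23:58:45.377, acquires it at 23:58:45.424. Concurrent CNI ADDs on one node are fully serialized through this lock; a minimal sketch of the pattern:

package main

import (
	"fmt"
	"sync"
	"time"
)

// One mutex stands in for the host-wide IPAM lock: each CNI ADD logs
// "About to acquire", waits its turn, assigns, then releases.
func main() {
	var ipamLock sync.Mutex
	var wg sync.WaitGroup
	for _, pod := range []string{"goldmane-54d579b49d-t7j5b", "calico-apiserver-648b95987d-pmqf8"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			fmt.Println(pod, "about to acquire host-wide IPAM lock")
			ipamLock.Lock()
			fmt.Println(pod, "acquired host-wide IPAM lock")
			time.Sleep(10 * time.Millisecond) // stand-in for the block read/claim/write
			ipamLock.Unlock()
			fmt.Println(pod, "released host-wide IPAM lock")
		}(pod)
	}
	wg.Wait()
}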
Sep 5 23:58:45.475167 containerd[1480]: 2025-09-05 23:58:45.423 [INFO][4732] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.5/26] IPv6=[] ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" HandleID="k8s-pod-network.0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:58:45.477323 containerd[1480]: 2025-09-05 23:58:45.427 [INFO][4709] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" Namespace="calico-system" Pod="goldmane-54d579b49d-t7j5b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"79a2c532-c00e-4e75-a9f5-90ed8209a139", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"", Pod:"goldmane-54d579b49d-t7j5b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali479b03d2385", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:45.477323 containerd[1480]: 2025-09-05 23:58:45.427 [INFO][4709] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.5/32] ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" Namespace="calico-system" Pod="goldmane-54d579b49d-t7j5b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:58:45.477323 containerd[1480]: 2025-09-05 23:58:45.427 [INFO][4709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali479b03d2385 ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" Namespace="calico-system" Pod="goldmane-54d579b49d-t7j5b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:58:45.477323 containerd[1480]: 2025-09-05 23:58:45.450 [INFO][4709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" Namespace="calico-system" Pod="goldmane-54d579b49d-t7j5b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:58:45.477323 containerd[1480]: 2025-09-05 23:58:45.452 [INFO][4709] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" 
Namespace="calico-system" Pod="goldmane-54d579b49d-t7j5b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"79a2c532-c00e-4e75-a9f5-90ed8209a139", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a", Pod:"goldmane-54d579b49d-t7j5b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali479b03d2385", MAC:"ca:88:44:9d:2b:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:45.477323 containerd[1480]: 2025-09-05 23:58:45.471 [INFO][4709] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a" Namespace="calico-system" Pod="goldmane-54d579b49d-t7j5b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:58:45.515937 containerd[1480]: time="2025-09-05T23:58:45.515348121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:45.515937 containerd[1480]: time="2025-09-05T23:58:45.515410368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:45.515937 containerd[1480]: time="2025-09-05T23:58:45.515438851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:45.516591 containerd[1480]: time="2025-09-05T23:58:45.515544383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:45.548906 systemd[1]: Started cri-containerd-0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a.scope - libcontainer container 0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a. Sep 5 23:58:45.573315 systemd-networkd[1376]: calie6a72fc5368: Link UP Sep 5 23:58:45.575438 systemd-networkd[1376]: calie6a72fc5368: Gained carrier Sep 5 23:58:45.614918 systemd[1]: Started sshd@7-138.199.175.7:22-43.134.41.225:56610.service - OpenSSH per-connection server daemon (43.134.41.225:56610). 
Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.316 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0 calico-apiserver-648b95987d- calico-apiserver cbe0b2e4-eb4a-4cc8-acdc-005b19facc59 1029 0 2025-09-05 23:58:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:648b95987d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-4ef3874a70 calico-apiserver-648b95987d-pmqf8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie6a72fc5368 [] [] }} ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-pmqf8" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.316 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-pmqf8" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.377 [INFO][4738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.377 [INFO][4738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-4ef3874a70", "pod":"calico-apiserver-648b95987d-pmqf8", "timestamp":"2025-09-05 23:58:45.377142584 +0000 UTC"}, Hostname:"ci-4081-3-5-n-4ef3874a70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.377 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.424 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.424 [INFO][4738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-4ef3874a70' Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.477 [INFO][4738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.490 [INFO][4738] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.505 [INFO][4738] ipam/ipam.go 511: Trying affinity for 192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.514 [INFO][4738] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.523 [INFO][4738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.525 [INFO][4738] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.0/26 handle="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.537 [INFO][4738] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.548 [INFO][4738] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.0/26 handle="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.560 [INFO][4738] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.6/26] block=192.168.102.0/26 handle="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.560 [INFO][4738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.6/26] handle="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.560 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
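This is the third allocation from the same affine block in this excerpt: the sandboxes receive 192.168.102.4, .5 and .6 in order, and .7 follows below for coredns. The /26 matches Calico's default IPAM block size, 64 addresses per block per claiming node, before another block would have to be claimed:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.102.0/26")
	n := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses
	fmt.Printf("%s holds %d addresses; .4 through .7 are claimed in this excerpt\n", block, n)
}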
Sep 5 23:58:45.619089 containerd[1480]: 2025-09-05 23:58:45.560 [INFO][4738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.6/26] IPv6=[] ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:58:45.623951 containerd[1480]: 2025-09-05 23:58:45.567 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-pmqf8" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0", GenerateName:"calico-apiserver-648b95987d-", Namespace:"calico-apiserver", SelfLink:"", UID:"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648b95987d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"", Pod:"calico-apiserver-648b95987d-pmqf8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a72fc5368", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:45.623951 containerd[1480]: 2025-09-05 23:58:45.567 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.6/32] ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-pmqf8" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:58:45.623951 containerd[1480]: 2025-09-05 23:58:45.567 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6a72fc5368 ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-pmqf8" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:58:45.623951 containerd[1480]: 2025-09-05 23:58:45.576 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-pmqf8" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:58:45.623951 containerd[1480]: 2025-09-05 23:58:45.581 
[INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-pmqf8" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0", GenerateName:"calico-apiserver-648b95987d-", Namespace:"calico-apiserver", SelfLink:"", UID:"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648b95987d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b", Pod:"calico-apiserver-648b95987d-pmqf8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a72fc5368", MAC:"8e:b6:fc:b7:fd:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:45.623951 containerd[1480]: 2025-09-05 23:58:45.602 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-pmqf8" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:58:45.665592 containerd[1480]: time="2025-09-05T23:58:45.664591245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:45.665592 containerd[1480]: time="2025-09-05T23:58:45.664651892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:45.665592 containerd[1480]: time="2025-09-05T23:58:45.664663733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:45.665592 containerd[1480]: time="2025-09-05T23:58:45.664753703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:45.700978 systemd[1]: Started cri-containerd-c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b.scope - libcontainer container c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b. 
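The "Started cri-containerd-<id>.scope - libcontainer container <id>." lines show systemd creating one transient scope unit per container; this naming pattern is what appears when containerd's runc runtime delegates cgroup management to systemd (the systemd cgroup driver, an assumption inferred from the unit names, since the containerd config itself is not in this log). The derivation is just:

package main

import "fmt"

// scopeUnit reproduces the transient unit name visible in the
// "Started cri-containerd-....scope" lines above.
func scopeUnit(containerID string) string {
	return fmt.Sprintf("cri-containerd-%s.scope", containerID)
}

func main() {
	fmt.Println(scopeUnit("c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b"))
}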
Sep 5 23:58:45.740029 containerd[1480]: time="2025-09-05T23:58:45.739954350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-t7j5b,Uid:79a2c532-c00e-4e75-a9f5-90ed8209a139,Namespace:calico-system,Attempt:1,} returns sandbox id \"0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a\"" Sep 5 23:58:45.805980 containerd[1480]: time="2025-09-05T23:58:45.805923810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648b95987d-pmqf8,Uid:cbe0b2e4-eb4a-4cc8-acdc-005b19facc59,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\"" Sep 5 23:58:45.907359 containerd[1480]: time="2025-09-05T23:58:45.907027738Z" level=info msg="StopPodSandbox for \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\"" Sep 5 23:58:45.912684 containerd[1480]: time="2025-09-05T23:58:45.910201251Z" level=info msg="StopPodSandbox for \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\"" Sep 5 23:58:45.918714 containerd[1480]: time="2025-09-05T23:58:45.917734130Z" level=info msg="StopPodSandbox for \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\"" Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.067 [INFO][4883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.067 [INFO][4883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" iface="eth0" netns="/var/run/netns/cni-cde109cc-c035-937e-df9b-a361d47d1692" Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.067 [INFO][4883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" iface="eth0" netns="/var/run/netns/cni-cde109cc-c035-937e-df9b-a361d47d1692" Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.068 [INFO][4883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" iface="eth0" netns="/var/run/netns/cni-cde109cc-c035-937e-df9b-a361d47d1692" Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.068 [INFO][4883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.068 [INFO][4883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.140 [INFO][4903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" HandleID="k8s-pod-network.2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.140 [INFO][4903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.140 [INFO][4903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
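Around each sandbox teardown, systemd also deactivates a run-netns-cni\x2d... mount unit. The \x2d is systemd unit-name escaping: "-" separates path components, so a literal dash must be written as \x2d. Reversing those two rules recovers the netns path logged by the plugin (modulo /var/run being a symlink to /run); a small sketch, ignoring the rest of systemd-escape's rules:

package main

import (
	"fmt"
	"strings"
)

// unitToPath reverses the two escaping rules visible in the mount unit
// names: "-" separates path components and a literal dash is \x2d.
func unitToPath(unit string) string {
	s := strings.TrimSuffix(unit, ".mount")
	s = strings.ReplaceAll(s, `\x2d`, "\x00") // protect literal dashes
	s = strings.ReplaceAll(s, "-", "/")
	s = strings.ReplaceAll(s, "\x00", "-")
	return "/" + s
}

func main() {
	fmt.Println(unitToPath(`run-netns-cni\x2df187f46d\x2df535\x2d2371\x2dd76e\x2d0696c95a22e4.mount`))
	// /run/netns/cni-f187f46d-f535-2371-d76e-0696c95a22e4
}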
Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.161 [WARNING][4903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" HandleID="k8s-pod-network.2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.161 [INFO][4903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" HandleID="k8s-pod-network.2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.165 [INFO][4903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:46.178202 containerd[1480]: 2025-09-05 23:58:46.172 [INFO][4883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:58:46.178202 containerd[1480]: time="2025-09-05T23:58:46.178016951Z" level=info msg="TearDown network for sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\" successfully" Sep 5 23:58:46.178202 containerd[1480]: time="2025-09-05T23:58:46.178052955Z" level=info msg="StopPodSandbox for \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\" returns successfully" Sep 5 23:58:46.181336 systemd[1]: run-netns-cni\x2dcde109cc\x2dc035\x2d937e\x2ddf9b\x2da361d47d1692.mount: Deactivated successfully. Sep 5 23:58:46.182554 containerd[1480]: time="2025-09-05T23:58:46.181603298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648b95987d-x5bx7,Uid:d5bacc15-80ca-43c3-bafd-f08e810b113d,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.034 [INFO][4859] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.035 [INFO][4859] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" iface="eth0" netns="/var/run/netns/cni-4b4862f2-1e92-bd64-06ad-8e4b5a672e95" Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.036 [INFO][4859] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" iface="eth0" netns="/var/run/netns/cni-4b4862f2-1e92-bd64-06ad-8e4b5a672e95" Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.041 [INFO][4859] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" iface="eth0" netns="/var/run/netns/cni-4b4862f2-1e92-bd64-06ad-8e4b5a672e95" Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.041 [INFO][4859] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.041 [INFO][4859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.159 [INFO][4898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" HandleID="k8s-pod-network.7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.161 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.165 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.206 [WARNING][4898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" HandleID="k8s-pod-network.7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.206 [INFO][4898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" HandleID="k8s-pod-network.7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.215 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:46.228870 containerd[1480]: 2025-09-05 23:58:46.219 [INFO][4859] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Sep 5 23:58:46.232759 containerd[1480]: time="2025-09-05T23:58:46.230739834Z" level=info msg="TearDown network for sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\" successfully" Sep 5 23:58:46.232759 containerd[1480]: time="2025-09-05T23:58:46.230782358Z" level=info msg="StopPodSandbox for \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\" returns successfully" Sep 5 23:58:46.234267 systemd[1]: run-netns-cni\x2d4b4862f2\x2d1e92\x2dbd64\x2d06ad\x2d8e4b5a672e95.mount: Deactivated successfully. Sep 5 23:58:46.236272 containerd[1480]: time="2025-09-05T23:58:46.236205743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kgz7b,Uid:fc1e0fa4-e565-4b6d-a320-3ab660954c63,Namespace:kube-system,Attempt:1,}" Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.076 [INFO][4881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.077 [INFO][4881] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" iface="eth0" netns="/var/run/netns/cni-81ad9c8b-87d7-5bd9-824c-77828d48c741" Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.082 [INFO][4881] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" iface="eth0" netns="/var/run/netns/cni-81ad9c8b-87d7-5bd9-824c-77828d48c741" Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.082 [INFO][4881] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" iface="eth0" netns="/var/run/netns/cni-81ad9c8b-87d7-5bd9-824c-77828d48c741" Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.082 [INFO][4881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.082 [INFO][4881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.195 [INFO][4908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" HandleID="k8s-pod-network.1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.195 [INFO][4908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.215 [INFO][4908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.246 [WARNING][4908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" HandleID="k8s-pod-network.1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.246 [INFO][4908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" HandleID="k8s-pod-network.1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.251 [INFO][4908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:46.267524 containerd[1480]: 2025-09-05 23:58:46.257 [INFO][4881] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:58:46.270035 containerd[1480]: time="2025-09-05T23:58:46.269494971Z" level=info msg="TearDown network for sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\" successfully" Sep 5 23:58:46.270035 containerd[1480]: time="2025-09-05T23:58:46.269549057Z" level=info msg="StopPodSandbox for \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\" returns successfully" Sep 5 23:58:46.271382 containerd[1480]: time="2025-09-05T23:58:46.271344610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685cf96569-gzvsl,Uid:84038953-2a1d-453b-a995-2e6cd5cc7120,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:58:46.544370 systemd-networkd[1376]: cali479b03d2385: Gained IPv6LL Sep 5 23:58:46.546284 systemd-networkd[1376]: cali6a5bd28214c: Link UP Sep 5 23:58:46.548754 systemd-networkd[1376]: cali6a5bd28214c: Gained carrier Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.344 [INFO][4930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0 coredns-674b8bbfcf- kube-system fc1e0fa4-e565-4b6d-a320-3ab660954c63 1050 0 2025-09-05 23:58:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-4ef3874a70 coredns-674b8bbfcf-kgz7b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6a5bd28214c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-kgz7b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.346 [INFO][4930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-kgz7b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.404 [INFO][4957] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" HandleID="k8s-pod-network.9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.405 [INFO][4957] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" HandleID="k8s-pod-network.9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-4ef3874a70", "pod":"coredns-674b8bbfcf-kgz7b", "timestamp":"2025-09-05 23:58:46.404727667 +0000 UTC"}, Hostname:"ci-4081-3-5-n-4ef3874a70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:58:46.583574 
containerd[1480]: 2025-09-05 23:58:46.405 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.405 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.406 [INFO][4957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-4ef3874a70' Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.440 [INFO][4957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.452 [INFO][4957] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.465 [INFO][4957] ipam/ipam.go 511: Trying affinity for 192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.472 [INFO][4957] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.484 [INFO][4957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.484 [INFO][4957] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.0/26 handle="k8s-pod-network.9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.491 [INFO][4957] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.509 [INFO][4957] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.0/26 handle="k8s-pod-network.9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.526 [INFO][4957] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.7/26] block=192.168.102.0/26 handle="k8s-pod-network.9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.526 [INFO][4957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.7/26] handle="k8s-pod-network.9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.526 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:58:46.583574 containerd[1480]: 2025-09-05 23:58:46.526 [INFO][4957] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.7/26] IPv6=[] ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" HandleID="k8s-pod-network.9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:46.584193 containerd[1480]: 2025-09-05 23:58:46.533 [INFO][4930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-kgz7b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fc1e0fa4-e565-4b6d-a320-3ab660954c63", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"", Pod:"coredns-674b8bbfcf-kgz7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a5bd28214c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:46.584193 containerd[1480]: 2025-09-05 23:58:46.534 [INFO][4930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.7/32] ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-kgz7b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:46.584193 containerd[1480]: 2025-09-05 23:58:46.534 [INFO][4930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a5bd28214c ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-kgz7b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:46.584193 containerd[1480]: 2025-09-05 23:58:46.550 [INFO][4930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-kgz7b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:46.584193 containerd[1480]: 2025-09-05 23:58:46.553 [INFO][4930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-kgz7b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fc1e0fa4-e565-4b6d-a320-3ab660954c63", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf", Pod:"coredns-674b8bbfcf-kgz7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a5bd28214c", MAC:"ee:d4:26:18:0f:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:46.584193 containerd[1480]: 2025-09-05 23:58:46.577 [INFO][4930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-kgz7b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:46.617714 sshd[4788]: Invalid user from 43.134.41.225 port 56610 Sep 5 23:58:46.645493 containerd[1480]: time="2025-09-05T23:58:46.643290500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:46.645493 containerd[1480]: time="2025-09-05T23:58:46.643367508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:46.645493 containerd[1480]: time="2025-09-05T23:58:46.643378349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:46.645493 containerd[1480]: time="2025-09-05T23:58:46.643492921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:46.673725 systemd[1]: Started cri-containerd-9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf.scope - libcontainer container 9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf. Sep 5 23:58:46.728947 systemd-networkd[1376]: calid2a7ce0290a: Link UP Sep 5 23:58:46.732896 systemd-networkd[1376]: calid2a7ce0290a: Gained carrier Sep 5 23:58:46.781132 containerd[1480]: time="2025-09-05T23:58:46.780946376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kgz7b,Uid:fc1e0fa4-e565-4b6d-a320-3ab660954c63,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf\"" Sep 5 23:58:46.794194 containerd[1480]: time="2025-09-05T23:58:46.793837486Z" level=info msg="CreateContainer within sandbox \"9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.339 [INFO][4920] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0 calico-apiserver-648b95987d- calico-apiserver d5bacc15-80ca-43c3-bafd-f08e810b113d 1051 0 2025-09-05 23:58:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:648b95987d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-4ef3874a70 calico-apiserver-648b95987d-x5bx7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid2a7ce0290a [] [] }} ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-x5bx7" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.341 [INFO][4920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-x5bx7" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.428 [INFO][4955] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.428 [INFO][4955] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9a0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-4ef3874a70", "pod":"calico-apiserver-648b95987d-x5bx7", "timestamp":"2025-09-05 23:58:46.42823252 +0000 UTC"}, Hostname:"ci-4081-3-5-n-4ef3874a70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.428 [INFO][4955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.526 [INFO][4955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.526 [INFO][4955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-4ef3874a70' Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.548 [INFO][4955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.579 [INFO][4955] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.606 [INFO][4955] ipam/ipam.go 511: Trying affinity for 192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.618 [INFO][4955] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.631 [INFO][4955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.631 [INFO][4955] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.0/26 handle="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.637 [INFO][4955] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.659 [INFO][4955] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.0/26 handle="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.694 [INFO][4955] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.8/26] block=192.168.102.0/26 handle="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.694 [INFO][4955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.8/26] handle="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.694 [INFO][4955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:58:46.803902 containerd[1480]: 2025-09-05 23:58:46.694 [INFO][4955] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.8/26] IPv6=[] ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:58:46.804794 containerd[1480]: 2025-09-05 23:58:46.705 [INFO][4920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-x5bx7" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0", GenerateName:"calico-apiserver-648b95987d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5bacc15-80ca-43c3-bafd-f08e810b113d", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648b95987d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"", Pod:"calico-apiserver-648b95987d-x5bx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2a7ce0290a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:46.804794 containerd[1480]: 2025-09-05 23:58:46.705 [INFO][4920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.8/32] ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-x5bx7" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:58:46.804794 containerd[1480]: 2025-09-05 23:58:46.706 [INFO][4920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2a7ce0290a ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-x5bx7" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:58:46.804794 containerd[1480]: 2025-09-05 23:58:46.735 [INFO][4920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-x5bx7" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:58:46.804794 containerd[1480]: 2025-09-05 23:58:46.736 
[INFO][4920] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-x5bx7" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0", GenerateName:"calico-apiserver-648b95987d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5bacc15-80ca-43c3-bafd-f08e810b113d", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648b95987d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f", Pod:"calico-apiserver-648b95987d-x5bx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2a7ce0290a", MAC:"a6:a7:92:82:fb:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:46.804794 containerd[1480]: 2025-09-05 23:58:46.793 [INFO][4920] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Namespace="calico-apiserver" Pod="calico-apiserver-648b95987d-x5bx7" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:58:46.843166 containerd[1480]: time="2025-09-05T23:58:46.842055603Z" level=info msg="CreateContainer within sandbox \"9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9af717bf07d2430adce362c75245154b21543967df8be2b8b75c69750b1aa22\"" Sep 5 23:58:46.846303 containerd[1480]: time="2025-09-05T23:58:46.845996388Z" level=info msg="StartContainer for \"f9af717bf07d2430adce362c75245154b21543967df8be2b8b75c69750b1aa22\"" Sep 5 23:58:46.867914 containerd[1480]: time="2025-09-05T23:58:46.866747504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:46.871433 containerd[1480]: time="2025-09-05T23:58:46.870342492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:46.873381 containerd[1480]: time="2025-09-05T23:58:46.873020300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:46.873381 containerd[1480]: time="2025-09-05T23:58:46.873175917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:46.929681 systemd[1]: Started cri-containerd-961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f.scope - libcontainer container 961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f. Sep 5 23:58:46.966745 systemd[1]: Started cri-containerd-f9af717bf07d2430adce362c75245154b21543967df8be2b8b75c69750b1aa22.scope - libcontainer container f9af717bf07d2430adce362c75245154b21543967df8be2b8b75c69750b1aa22. Sep 5 23:58:46.969018 systemd-networkd[1376]: cali4cc6558fbf9: Link UP Sep 5 23:58:46.970355 systemd-networkd[1376]: cali4cc6558fbf9: Gained carrier Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.435 [INFO][4944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0 calico-apiserver-685cf96569- calico-apiserver 84038953-2a1d-453b-a995-2e6cd5cc7120 1052 0 2025-09-05 23:58:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:685cf96569 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-4ef3874a70 calico-apiserver-685cf96569-gzvsl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4cc6558fbf9 [] [] }} ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-gzvsl" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.436 [INFO][4944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-gzvsl" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.507 [INFO][4972] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" HandleID="k8s-pod-network.24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.507 [INFO][4972] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" HandleID="k8s-pod-network.24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-4ef3874a70", "pod":"calico-apiserver-685cf96569-gzvsl", "timestamp":"2025-09-05 23:58:46.507475301 +0000 UTC"}, Hostname:"ci-4081-3-5-n-4ef3874a70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.507 [INFO][4972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.694 [INFO][4972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.695 [INFO][4972] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-4ef3874a70' Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.752 [INFO][4972] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.777 [INFO][4972] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.810 [INFO][4972] ipam/ipam.go 511: Trying affinity for 192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.833 [INFO][4972] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.857 [INFO][4972] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.857 [INFO][4972] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.0/26 handle="k8s-pod-network.24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.861 [INFO][4972] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75 Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.873 [INFO][4972] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.0/26 handle="k8s-pod-network.24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.905 [INFO][4972] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.9/26] block=192.168.102.0/26 handle="k8s-pod-network.24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.906 [INFO][4972] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.9/26] handle="k8s-pod-network.24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.906 [INFO][4972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:58:47.002316 containerd[1480]: 2025-09-05 23:58:46.906 [INFO][4972] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.9/26] IPv6=[] ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" HandleID="k8s-pod-network.24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:47.004395 containerd[1480]: 2025-09-05 23:58:46.920 [INFO][4944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-gzvsl" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0", GenerateName:"calico-apiserver-685cf96569-", Namespace:"calico-apiserver", SelfLink:"", UID:"84038953-2a1d-453b-a995-2e6cd5cc7120", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685cf96569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"", Pod:"calico-apiserver-685cf96569-gzvsl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4cc6558fbf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:47.004395 containerd[1480]: 2025-09-05 23:58:46.921 [INFO][4944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.9/32] ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-gzvsl" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:47.004395 containerd[1480]: 2025-09-05 23:58:46.921 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4cc6558fbf9 ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-gzvsl" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:47.004395 containerd[1480]: 2025-09-05 23:58:46.969 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-gzvsl" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:47.004395 containerd[1480]: 2025-09-05 23:58:46.974 
[INFO][4944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-gzvsl" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0", GenerateName:"calico-apiserver-685cf96569-", Namespace:"calico-apiserver", SelfLink:"", UID:"84038953-2a1d-453b-a995-2e6cd5cc7120", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685cf96569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75", Pod:"calico-apiserver-685cf96569-gzvsl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4cc6558fbf9", MAC:"a6:b9:79:3f:f7:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:47.004395 containerd[1480]: 2025-09-05 23:58:46.997 [INFO][4944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-gzvsl" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:47.031652 containerd[1480]: time="2025-09-05T23:58:47.030883933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:47.039837 containerd[1480]: time="2025-09-05T23:58:47.039619485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 23:58:47.041825 containerd[1480]: time="2025-09-05T23:58:47.041054795Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:47.046621 containerd[1480]: time="2025-09-05T23:58:47.046557729Z" level=info msg="StartContainer for \"f9af717bf07d2430adce362c75245154b21543967df8be2b8b75c69750b1aa22\" returns successfully" Sep 5 23:58:47.052783 containerd[1480]: time="2025-09-05T23:58:47.052656526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:47.055330 containerd[1480]: time="2025-09-05T23:58:47.055015773Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 3.524530219s" Sep 5 23:58:47.055781 containerd[1480]: time="2025-09-05T23:58:47.055663400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 23:58:47.057537 containerd[1480]: time="2025-09-05T23:58:47.057362898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:58:47.057537 containerd[1480]: time="2025-09-05T23:58:47.057444586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:58:47.057537 containerd[1480]: time="2025-09-05T23:58:47.057457428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:47.058801 containerd[1480]: time="2025-09-05T23:58:47.057545037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:58:47.060535 containerd[1480]: time="2025-09-05T23:58:47.059951248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 23:58:47.069601 containerd[1480]: time="2025-09-05T23:58:47.068694001Z" level=info msg="CreateContainer within sandbox \"ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 23:58:47.092312 systemd[1]: Started cri-containerd-24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75.scope - libcontainer container 24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75. Sep 5 23:58:47.125728 systemd[1]: run-netns-cni\x2d81ad9c8b\x2d87d7\x2d5bd9\x2d824c\x2d77828d48c741.mount: Deactivated successfully. Sep 5 23:58:47.161116 containerd[1480]: time="2025-09-05T23:58:47.161066646Z" level=info msg="CreateContainer within sandbox \"ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3b112d03c23c0b1f05cdebbf5e0e2802c78f5175c9b057d4c3d180f377b681ad\"" Sep 5 23:58:47.165499 containerd[1480]: time="2025-09-05T23:58:47.162723979Z" level=info msg="StartContainer for \"3b112d03c23c0b1f05cdebbf5e0e2802c78f5175c9b057d4c3d180f377b681ad\"" Sep 5 23:58:47.225926 systemd[1]: Started cri-containerd-3b112d03c23c0b1f05cdebbf5e0e2802c78f5175c9b057d4c3d180f377b681ad.scope - libcontainer container 3b112d03c23c0b1f05cdebbf5e0e2802c78f5175c9b057d4c3d180f377b681ad. 
Sep 5 23:58:47.247818 systemd-networkd[1376]: calie6a72fc5368: Gained IPv6LL Sep 5 23:58:47.313651 containerd[1480]: time="2025-09-05T23:58:47.313582571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648b95987d-x5bx7,Uid:d5bacc15-80ca-43c3-bafd-f08e810b113d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\"" Sep 5 23:58:47.328608 kubelet[2614]: I0905 23:58:47.328142 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kgz7b" podStartSLOduration=44.328121689 podStartE2EDuration="44.328121689s" podCreationTimestamp="2025-09-05 23:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:58:47.326738225 +0000 UTC m=+49.565890614" watchObservedRunningTime="2025-09-05 23:58:47.328121689 +0000 UTC m=+49.567274078" Sep 5 23:58:47.334882 containerd[1480]: time="2025-09-05T23:58:47.334816308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685cf96569-gzvsl,Uid:84038953-2a1d-453b-a995-2e6cd5cc7120,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75\"" Sep 5 23:58:47.390316 containerd[1480]: time="2025-09-05T23:58:47.390124323Z" level=info msg="StartContainer for \"3b112d03c23c0b1f05cdebbf5e0e2802c78f5175c9b057d4c3d180f377b681ad\" returns successfully" Sep 5 23:58:47.693829 systemd-networkd[1376]: cali6a5bd28214c: Gained IPv6LL Sep 5 23:58:48.077359 systemd-networkd[1376]: calid2a7ce0290a: Gained IPv6LL Sep 5 23:58:48.717790 systemd-networkd[1376]: cali4cc6558fbf9: Gained IPv6LL Sep 5 23:58:50.438367 containerd[1480]: time="2025-09-05T23:58:50.438051911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:50.441798 containerd[1480]: time="2025-09-05T23:58:50.441197129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 23:58:50.443472 containerd[1480]: time="2025-09-05T23:58:50.442966777Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:50.446283 containerd[1480]: time="2025-09-05T23:58:50.445872333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:50.447085 containerd[1480]: time="2025-09-05T23:58:50.446879669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.386875095s" Sep 5 23:58:50.447085 containerd[1480]: time="2025-09-05T23:58:50.446920593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 23:58:50.451848 containerd[1480]: time="2025-09-05T23:58:50.451713688Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 23:58:50.469395 containerd[1480]: time="2025-09-05T23:58:50.469349922Z" level=info msg="CreateContainer within sandbox \"574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 23:58:50.484330 containerd[1480]: time="2025-09-05T23:58:50.483997673Z" level=info msg="CreateContainer within sandbox \"574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"532ee49194b1e24d777a829b44c9dee8a2f93f13b5d7067db2651770047dce71\"" Sep 5 23:58:50.486177 containerd[1480]: time="2025-09-05T23:58:50.485042292Z" level=info msg="StartContainer for \"532ee49194b1e24d777a829b44c9dee8a2f93f13b5d7067db2651770047dce71\"" Sep 5 23:58:50.523767 systemd[1]: Started cri-containerd-532ee49194b1e24d777a829b44c9dee8a2f93f13b5d7067db2651770047dce71.scope - libcontainer container 532ee49194b1e24d777a829b44c9dee8a2f93f13b5d7067db2651770047dce71. Sep 5 23:58:50.584565 containerd[1480]: time="2025-09-05T23:58:50.584382122Z" level=info msg="StartContainer for \"532ee49194b1e24d777a829b44c9dee8a2f93f13b5d7067db2651770047dce71\" returns successfully" Sep 5 23:58:51.364055 kubelet[2614]: I0905 23:58:51.363955 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cb67bb5b6-b9xqc" podStartSLOduration=25.382605549 podStartE2EDuration="32.363937173s" podCreationTimestamp="2025-09-05 23:58:19 +0000 UTC" firstStartedPulling="2025-09-05 23:58:43.468489284 +0000 UTC m=+45.707641673" lastFinishedPulling="2025-09-05 23:58:50.449820908 +0000 UTC m=+52.688973297" observedRunningTime="2025-09-05 23:58:51.363614543 +0000 UTC m=+53.602766932" watchObservedRunningTime="2025-09-05 23:58:51.363937173 +0000 UTC m=+53.603089562" Sep 5 23:58:53.163907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827585987.mount: Deactivated successfully. Sep 5 23:58:53.592875 sshd[4788]: Connection closed by invalid user 43.134.41.225 port 56610 [preauth] Sep 5 23:58:53.597415 systemd[1]: sshd@7-138.199.175.7:22-43.134.41.225:56610.service: Deactivated successfully. 
Sep 5 23:58:53.831965 containerd[1480]: time="2025-09-05T23:58:53.830935331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:53.833445 containerd[1480]: time="2025-09-05T23:58:53.832470343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 23:58:53.834891 containerd[1480]: time="2025-09-05T23:58:53.834811425Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:53.840624 containerd[1480]: time="2025-09-05T23:58:53.840572202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:53.842707 containerd[1480]: time="2025-09-05T23:58:53.841912078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.389976289s" Sep 5 23:58:53.842707 containerd[1480]: time="2025-09-05T23:58:53.841959522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 5 23:58:53.846696 containerd[1480]: time="2025-09-05T23:58:53.845872580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:58:53.882735 containerd[1480]: time="2025-09-05T23:58:53.882693078Z" level=info msg="CreateContainer within sandbox \"0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 23:58:53.905919 containerd[1480]: time="2025-09-05T23:58:53.905726346Z" level=info msg="CreateContainer within sandbox \"0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0a71309c5fa637d0177d62df08e4832899ddcfef1f34c0759910724792bcbe58\"" Sep 5 23:58:53.914121 containerd[1480]: time="2025-09-05T23:58:53.911200138Z" level=info msg="StartContainer for \"0a71309c5fa637d0177d62df08e4832899ddcfef1f34c0759910724792bcbe58\"" Sep 5 23:58:54.013452 systemd[1]: Started cri-containerd-0a71309c5fa637d0177d62df08e4832899ddcfef1f34c0759910724792bcbe58.scope - libcontainer container 0a71309c5fa637d0177d62df08e4832899ddcfef1f34c0759910724792bcbe58. Sep 5 23:58:54.085130 containerd[1480]: time="2025-09-05T23:58:54.084690125Z" level=info msg="StartContainer for \"0a71309c5fa637d0177d62df08e4832899ddcfef1f34c0759910724792bcbe58\" returns successfully" Sep 5 23:58:56.387161 systemd[1]: run-containerd-runc-k8s.io-0a71309c5fa637d0177d62df08e4832899ddcfef1f34c0759910724792bcbe58-runc.Bf4cw3.mount: Deactivated successfully. 
Sep 5 23:58:57.603411 containerd[1480]: time="2025-09-05T23:58:57.603343844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:57.605764 containerd[1480]: time="2025-09-05T23:58:57.605711304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 23:58:57.606595 containerd[1480]: time="2025-09-05T23:58:57.606533007Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:57.611177 containerd[1480]: time="2025-09-05T23:58:57.611102434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:57.614489 containerd[1480]: time="2025-09-05T23:58:57.611674238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.765756774s" Sep 5 23:58:57.614489 containerd[1480]: time="2025-09-05T23:58:57.611708080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:58:57.615788 containerd[1480]: time="2025-09-05T23:58:57.615749387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:58:57.620851 containerd[1480]: time="2025-09-05T23:58:57.620810732Z" level=info msg="CreateContainer within sandbox \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:58:57.651298 containerd[1480]: time="2025-09-05T23:58:57.651029229Z" level=info msg="CreateContainer within sandbox \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\"" Sep 5 23:58:57.651733 containerd[1480]: time="2025-09-05T23:58:57.651710281Z" level=info msg="StartContainer for \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\"" Sep 5 23:58:57.714787 systemd[1]: Started cri-containerd-512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88.scope - libcontainer container 512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88. 
Sep 5 23:58:57.814066 containerd[1480]: time="2025-09-05T23:58:57.814010299Z" level=info msg="StartContainer for \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\" returns successfully" Sep 5 23:58:57.909540 containerd[1480]: time="2025-09-05T23:58:57.909221337Z" level=info msg="StopPodSandbox for \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\"" Sep 5 23:58:58.019880 containerd[1480]: time="2025-09-05T23:58:58.019825500Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:58:58.027452 containerd[1480]: time="2025-09-05T23:58:58.023950163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 23:58:58.028463 containerd[1480]: time="2025-09-05T23:58:58.028399531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 412.134704ms" Sep 5 23:58:58.028463 containerd[1480]: time="2025-09-05T23:58:58.028463176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:58:58.042632 containerd[1480]: time="2025-09-05T23:58:58.041297721Z" level=info msg="CreateContainer within sandbox \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:58:58.048194 containerd[1480]: time="2025-09-05T23:58:58.048038978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:57.970 [WARNING][5467] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fc1e0fa4-e565-4b6d-a320-3ab660954c63", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf", Pod:"coredns-674b8bbfcf-kgz7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a5bd28214c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:57.971 [INFO][5467] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:57.971 [INFO][5467] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" iface="eth0" netns="" Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:57.971 [INFO][5467] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:57.971 [INFO][5467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:58.012 [INFO][5476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" HandleID="k8s-pod-network.7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0" Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:58.016 [INFO][5476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:58.016 [INFO][5476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:58.047 [WARNING][5476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" HandleID="k8s-pod-network.7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0"
Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:58.047 [INFO][5476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" HandleID="k8s-pod-network.7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0"
Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:58.055 [INFO][5476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:58:58.062562 containerd[1480]: 2025-09-05 23:58:58.059 [INFO][5467] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76"
Sep 5 23:58:58.065215 containerd[1480]: time="2025-09-05T23:58:58.062606530Z" level=info msg="TearDown network for sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\" successfully"
Sep 5 23:58:58.065215 containerd[1480]: time="2025-09-05T23:58:58.062637773Z" level=info msg="StopPodSandbox for \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\" returns successfully"
Sep 5 23:58:58.068127 containerd[1480]: time="2025-09-05T23:58:58.067902400Z" level=info msg="RemovePodSandbox for \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\""
Sep 5 23:58:58.068127 containerd[1480]: time="2025-09-05T23:58:58.067990207Z" level=info msg="Forcibly stopping sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\""
Sep 5 23:58:58.080888 containerd[1480]: time="2025-09-05T23:58:58.080836273Z" level=info msg="CreateContainer within sandbox \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\""
Sep 5 23:58:58.083090 containerd[1480]: time="2025-09-05T23:58:58.083050436Z" level=info msg="StartContainer for \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\""
Sep 5 23:58:58.136659 systemd[1]: Started cri-containerd-2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384.scope - libcontainer container 2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384.
Sep 5 23:58:58.232567 containerd[1480]: time="2025-09-05T23:58:58.230655347Z" level=info msg="StartContainer for \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\" returns successfully"
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.151 [WARNING][5493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fc1e0fa4-e565-4b6d-a320-3ab660954c63", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"9d119c8da8ecd92a398ea281a4001101b4b583cbc8d0d3785f78fc49a92b07cf", Pod:"coredns-674b8bbfcf-kgz7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a5bd28214c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.152 [INFO][5493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76"
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.152 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" iface="eth0" netns=""
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.152 [INFO][5493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76"
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.152 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76"
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.192 [INFO][5526] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" HandleID="k8s-pod-network.7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0"
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.194 [INFO][5526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.194 [INFO][5526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.213 [WARNING][5526] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" HandleID="k8s-pod-network.7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0"
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.213 [INFO][5526] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" HandleID="k8s-pod-network.7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--kgz7b-eth0"
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.219 [INFO][5526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:58:58.234800 containerd[1480]: 2025-09-05 23:58:58.224 [INFO][5493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76"
Sep 5 23:58:58.235262 containerd[1480]: time="2025-09-05T23:58:58.234835974Z" level=info msg="TearDown network for sandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\" successfully"
Sep 5 23:58:58.241940 containerd[1480]: time="2025-09-05T23:58:58.241747323Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:58:58.241940 containerd[1480]: time="2025-09-05T23:58:58.241893494Z" level=info msg="RemovePodSandbox \"7eb339b8ff3f56bc43319c2de481126926f30465c3a8170a7fc40cb0bd9d9d76\" returns successfully"
Sep 5 23:58:58.243260 containerd[1480]: time="2025-09-05T23:58:58.242742077Z" level=info msg="StopPodSandbox for \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\""
Sep 5 23:58:58.398455 kubelet[2614]: I0905 23:58:58.396543 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-t7j5b" podStartSLOduration=31.29297179 podStartE2EDuration="39.396524042s" podCreationTimestamp="2025-09-05 23:58:19 +0000 UTC" firstStartedPulling="2025-09-05 23:58:45.742103989 +0000 UTC m=+47.981256338" lastFinishedPulling="2025-09-05 23:58:53.845656201 +0000 UTC m=+56.084808590" observedRunningTime="2025-09-05 23:58:54.391706716 +0000 UTC m=+56.630859105" watchObservedRunningTime="2025-09-05 23:58:58.396524042 +0000 UTC m=+60.635676431"
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.335 [WARNING][5551] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b841a2a5-b2e2-4dd3-a133-b08f780b324f", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e", Pod:"csi-node-driver-7tcpm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44819cff664", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.335 [INFO][5551] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c"
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.335 [INFO][5551] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" iface="eth0" netns=""
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.335 [INFO][5551] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c"
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.335 [INFO][5551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c"
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.369 [INFO][5562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" HandleID="k8s-pod-network.52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0"
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.369 [INFO][5562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.369 [INFO][5562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.395 [WARNING][5562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" HandleID="k8s-pod-network.52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0"
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.395 [INFO][5562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" HandleID="k8s-pod-network.52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0"
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.402 [INFO][5562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:58:58.406627 containerd[1480]: 2025-09-05 23:58:58.404 [INFO][5551] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c"
Sep 5 23:58:58.407936 containerd[1480]: time="2025-09-05T23:58:58.406674550Z" level=info msg="TearDown network for sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\" successfully"
Sep 5 23:58:58.407936 containerd[1480]: time="2025-09-05T23:58:58.406700752Z" level=info msg="StopPodSandbox for \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\" returns successfully"
Sep 5 23:58:58.407936 containerd[1480]: time="2025-09-05T23:58:58.407240311Z" level=info msg="RemovePodSandbox for \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\""
Sep 5 23:58:58.407936 containerd[1480]: time="2025-09-05T23:58:58.407273554Z" level=info msg="Forcibly stopping sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\""
Sep 5 23:58:58.457102 kubelet[2614]: I0905 23:58:58.456514 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-648b95987d-pmqf8" podStartSLOduration=32.650686715 podStartE2EDuration="44.456491699s" podCreationTimestamp="2025-09-05 23:58:14 +0000 UTC" firstStartedPulling="2025-09-05 23:58:45.809546573 +0000 UTC m=+48.048698962" lastFinishedPulling="2025-09-05 23:58:57.615351477 +0000 UTC m=+59.854503946" observedRunningTime="2025-09-05 23:58:58.399741799 +0000 UTC m=+60.638894188" watchObservedRunningTime="2025-09-05 23:58:58.456491699 +0000 UTC m=+60.695644088"
Sep 5 23:58:58.466321 containerd[1480]: time="2025-09-05T23:58:58.466253578Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:58:58.468871 containerd[1480]: time="2025-09-05T23:58:58.467754408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 5 23:58:58.470747 containerd[1480]: time="2025-09-05T23:58:58.470413564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 421.843787ms"
Sep 5 23:58:58.470934 containerd[1480]: time="2025-09-05T23:58:58.470897240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 5 23:58:58.474825 containerd[1480]: time="2025-09-05T23:58:58.474747723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 5 23:58:58.480143 containerd[1480]: time="2025-09-05T23:58:58.479926024Z" level=info msg="CreateContainer within sandbox \"24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 23:58:58.501539 containerd[1480]: time="2025-09-05T23:58:58.501378844Z" level=info msg="CreateContainer within sandbox \"24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"85b9d8b30bd51f6221666862409cd6cfaac49b6e0dcc7ef27ffb5d5132123330\""
Sep 5 23:58:58.504031 containerd[1480]: time="2025-09-05T23:58:58.503991797Z" level=info msg="StartContainer for \"85b9d8b30bd51f6221666862409cd6cfaac49b6e0dcc7ef27ffb5d5132123330\""
Sep 5 23:58:58.582676 systemd[1]: Started cri-containerd-85b9d8b30bd51f6221666862409cd6cfaac49b6e0dcc7ef27ffb5d5132123330.scope - libcontainer container 85b9d8b30bd51f6221666862409cd6cfaac49b6e0dcc7ef27ffb5d5132123330.
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.552 [WARNING][5576] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b841a2a5-b2e2-4dd3-a133-b08f780b324f", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e", Pod:"csi-node-driver-7tcpm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44819cff664", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.552 [INFO][5576] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c"
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.552 [INFO][5576] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" iface="eth0" netns=""
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.553 [INFO][5576] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c"
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.553 [INFO][5576] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c"
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.585 [INFO][5601] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" HandleID="k8s-pod-network.52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0"
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.586 [INFO][5601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.586 [INFO][5601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.612 [WARNING][5601] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" HandleID="k8s-pod-network.52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0"
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.612 [INFO][5601] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" HandleID="k8s-pod-network.52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c" Workload="ci--4081--3--5--n--4ef3874a70-k8s-csi--node--driver--7tcpm-eth0"
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.617 [INFO][5601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:58:58.623358 containerd[1480]: 2025-09-05 23:58:58.619 [INFO][5576] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c"
Sep 5 23:58:58.623358 containerd[1480]: time="2025-09-05T23:58:58.621255033Z" level=info msg="TearDown network for sandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\" successfully"
Sep 5 23:58:58.628571 containerd[1480]: time="2025-09-05T23:58:58.628521768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:58:58.628781 containerd[1480]: time="2025-09-05T23:58:58.628762946Z" level=info msg="RemovePodSandbox \"52f5ee47dc7705f155d2c3f2f81aae457599c93b1d2d6ffc843090d1c8a7f76c\" returns successfully"
Sep 5 23:58:58.629336 containerd[1480]: time="2025-09-05T23:58:58.629298865Z" level=info msg="StopPodSandbox for \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\""
Sep 5 23:58:58.752411 systemd[1]: run-containerd-runc-k8s.io-0a71309c5fa637d0177d62df08e4832899ddcfef1f34c0759910724792bcbe58-runc.YUHw3r.mount: Deactivated successfully.
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.697 [WARNING][5620] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0"
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.697 [INFO][5620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3"
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.697 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" iface="eth0" netns=""
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.697 [INFO][5620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3"
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.697 [INFO][5620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3"
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.734 [INFO][5629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" HandleID="k8s-pod-network.e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0"
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.735 [INFO][5629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.735 [INFO][5629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.769 [WARNING][5629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" HandleID="k8s-pod-network.e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0"
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.770 [INFO][5629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" HandleID="k8s-pod-network.e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0"
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.772 [INFO][5629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:58:58.775144 containerd[1480]: 2025-09-05 23:58:58.773 [INFO][5620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3"
Sep 5 23:58:58.775717 containerd[1480]: time="2025-09-05T23:58:58.775197970Z" level=info msg="TearDown network for sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\" successfully"
Sep 5 23:58:58.775717 containerd[1480]: time="2025-09-05T23:58:58.775225812Z" level=info msg="StopPodSandbox for \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\" returns successfully"
Sep 5 23:58:58.776828 containerd[1480]: time="2025-09-05T23:58:58.776650357Z" level=info msg="RemovePodSandbox for \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\""
Sep 5 23:58:58.776828 containerd[1480]: time="2025-09-05T23:58:58.776698721Z" level=info msg="Forcibly stopping sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\""
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.851 [WARNING][5664] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0"
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.851 [INFO][5664] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3"
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.851 [INFO][5664] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" iface="eth0" netns=""
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.851 [INFO][5664] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3"
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.851 [INFO][5664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3"
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.919 [INFO][5674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" HandleID="k8s-pod-network.e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0"
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.920 [INFO][5674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.920 [INFO][5674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.933 [WARNING][5674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" HandleID="k8s-pod-network.e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0"
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.933 [INFO][5674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" HandleID="k8s-pod-network.e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-whisker--66b8b66984--qdr7l-eth0"
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.935 [INFO][5674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:58:58.938772 containerd[1480]: 2025-09-05 23:58:58.937 [INFO][5664] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3"
Sep 5 23:58:58.939314 containerd[1480]: time="2025-09-05T23:58:58.938964831Z" level=info msg="TearDown network for sandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\" successfully"
Sep 5 23:58:58.945685 containerd[1480]: time="2025-09-05T23:58:58.945632882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:58:58.945791 containerd[1480]: time="2025-09-05T23:58:58.945733290Z" level=info msg="RemovePodSandbox \"e931fb016da03ebdf7cf5f74994396ba6777d6e634ab2b905fe216c3741926a3\" returns successfully"
Sep 5 23:58:58.946393 containerd[1480]: time="2025-09-05T23:58:58.946362616Z" level=info msg="StopPodSandbox for \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\""
Sep 5 23:58:59.013059 containerd[1480]: time="2025-09-05T23:58:59.011850373Z" level=info msg="StartContainer for \"85b9d8b30bd51f6221666862409cd6cfaac49b6e0dcc7ef27ffb5d5132123330\" returns successfully"
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.041 [WARNING][5695] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0", GenerateName:"calico-kube-controllers-7cb67bb5b6-", Namespace:"calico-system", SelfLink:"", UID:"62b3a3df-9267-4dc2-9423-4fc82ad97b42", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb67bb5b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469", Pod:"calico-kube-controllers-7cb67bb5b6-b9xqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2677de3acf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.043 [INFO][5695] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555"
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.043 [INFO][5695] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" iface="eth0" netns=""
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.043 [INFO][5695] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555"
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.043 [INFO][5695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555"
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.090 [INFO][5712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" HandleID="k8s-pod-network.b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0"
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.090 [INFO][5712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.090 [INFO][5712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.102 [WARNING][5712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" HandleID="k8s-pod-network.b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0"
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.102 [INFO][5712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" HandleID="k8s-pod-network.b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0"
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.104 [INFO][5712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:58:59.109760 containerd[1480]: 2025-09-05 23:58:59.107 [INFO][5695] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555"
Sep 5 23:58:59.110189 containerd[1480]: time="2025-09-05T23:58:59.109803481Z" level=info msg="TearDown network for sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\" successfully"
Sep 5 23:58:59.110189 containerd[1480]: time="2025-09-05T23:58:59.109829963Z" level=info msg="StopPodSandbox for \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\" returns successfully"
Sep 5 23:58:59.111140 containerd[1480]: time="2025-09-05T23:58:59.111098374Z" level=info msg="RemovePodSandbox for \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\""
Sep 5 23:58:59.111282 containerd[1480]: time="2025-09-05T23:58:59.111141537Z" level=info msg="Forcibly stopping sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\""
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.181 [WARNING][5727] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0", GenerateName:"calico-kube-controllers-7cb67bb5b6-", Namespace:"calico-system", SelfLink:"", UID:"62b3a3df-9267-4dc2-9423-4fc82ad97b42", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb67bb5b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"574b1d521238522deb0a4faa5b832d6b0b34639832109db4c283a88fdf169469", Pod:"calico-kube-controllers-7cb67bb5b6-b9xqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2677de3acf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.182 [INFO][5727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555"
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.182 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" iface="eth0" netns=""
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.182 [INFO][5727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555"
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.182 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555"
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.220 [INFO][5734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" HandleID="k8s-pod-network.b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0"
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.220 [INFO][5734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.220 [INFO][5734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.233 [WARNING][5734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" HandleID="k8s-pod-network.b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0"
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.233 [INFO][5734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" HandleID="k8s-pod-network.b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--kube--controllers--7cb67bb5b6--b9xqc-eth0"
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.237 [INFO][5734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:58:59.244463 containerd[1480]: 2025-09-05 23:58:59.241 [INFO][5727] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555"
Sep 5 23:58:59.244886 containerd[1480]: time="2025-09-05T23:58:59.244467089Z" level=info msg="TearDown network for sandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\" successfully"
Sep 5 23:58:59.250949 containerd[1480]: time="2025-09-05T23:58:59.250898628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:58:59.251091 containerd[1480]: time="2025-09-05T23:58:59.250984834Z" level=info msg="RemovePodSandbox \"b34e0ac0bbc3be03e4480e47ef4d17c279ebccb13ac3ed7dc48f6a33b7578555\" returns successfully"
Sep 5 23:58:59.252239 containerd[1480]: time="2025-09-05T23:58:59.252177839Z" level=info msg="StopPodSandbox for \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\""
Sep 5 23:58:59.402900 kubelet[2614]: I0905 23:58:59.402850 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.328 [WARNING][5748] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0", GenerateName:"calico-apiserver-648b95987d-", Namespace:"calico-apiserver", SelfLink:"", UID:"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648b95987d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b", Pod:"calico-apiserver-648b95987d-pmqf8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a72fc5368", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.329 [INFO][5748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75"
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.329 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" iface="eth0" netns=""
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.329 [INFO][5748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75"
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.329 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75"
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.392 [INFO][5755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" HandleID="k8s-pod-network.2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0"
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.392 [INFO][5755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.393 [INFO][5755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.406 [WARNING][5755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" HandleID="k8s-pod-network.2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0"
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.406 [INFO][5755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" HandleID="k8s-pod-network.2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0"
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.410 [INFO][5755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:58:59.416853 containerd[1480]: 2025-09-05 23:58:59.414 [INFO][5748] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75"
Sep 5 23:58:59.417391 containerd[1480]: time="2025-09-05T23:58:59.416911032Z" level=info msg="TearDown network for sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\" successfully"
Sep 5 23:58:59.417391 containerd[1480]: time="2025-09-05T23:58:59.416951035Z" level=info msg="StopPodSandbox for \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\" returns successfully"
Sep 5 23:58:59.419650 containerd[1480]: time="2025-09-05T23:58:59.418609433Z" level=info msg="RemovePodSandbox for \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\""
Sep 5 23:58:59.419650 containerd[1480]: time="2025-09-05T23:58:59.418651596Z" level=info msg="Forcibly stopping sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\""
Sep 5 23:58:59.425888 kubelet[2614]: I0905 23:58:59.425121 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-648b95987d-x5bx7" podStartSLOduration=34.710535332 podStartE2EDuration="45.425097656s" podCreationTimestamp="2025-09-05 23:58:14 +0000 UTC" firstStartedPulling="2025-09-05 23:58:47.316744141 +0000 UTC m=+49.555896530" lastFinishedPulling="2025-09-05 23:58:58.031306465 +0000 UTC m=+60.270458854" observedRunningTime="2025-09-05 23:58:58.461287812 +0000 UTC m=+60.700440241" watchObservedRunningTime="2025-09-05 23:58:59.425097656 +0000 UTC m=+61.664250045"
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.472 [WARNING][5773] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0", GenerateName:"calico-apiserver-648b95987d-", Namespace:"calico-apiserver", SelfLink:"", UID:"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648b95987d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b", Pod:"calico-apiserver-648b95987d-pmqf8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a72fc5368", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.472 [INFO][5773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75"
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.472 [INFO][5773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" iface="eth0" netns=""
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.472 [INFO][5773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75"
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.473 [INFO][5773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75"
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.526 [INFO][5780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" HandleID="k8s-pod-network.2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0"
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.526 [INFO][5780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.526 [INFO][5780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.547 [WARNING][5780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" HandleID="k8s-pod-network.2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0"
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.549 [INFO][5780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" HandleID="k8s-pod-network.2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0"
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.553 [INFO][5780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:58:59.557884 containerd[1480]: 2025-09-05 23:58:59.555 [INFO][5773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75"
Sep 5 23:58:59.558371 containerd[1480]: time="2025-09-05T23:58:59.557915212Z" level=info msg="TearDown network for sandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\" successfully"
Sep 5 23:58:59.576578 containerd[1480]: time="2025-09-05T23:58:59.576516299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:58:59.576727 containerd[1480]: time="2025-09-05T23:58:59.576633228Z" level=info msg="RemovePodSandbox \"2bd77785937725ec1b195ed98468164030e9f3a75941990113c4a61cf3a57f75\" returns successfully"
Sep 5 23:58:59.577718 containerd[1480]: time="2025-09-05T23:58:59.577687143Z" level=info msg="StopPodSandbox for \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\""
Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.625 [WARNING][5794] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c543c41c-ee3f-499e-8d6b-b62b005decb4", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7", Pod:"coredns-674b8bbfcf-8kktj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali256f81effc7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.625 [INFO][5794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471"
Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.625 [INFO][5794] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" iface="eth0" netns=""
Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.625 [INFO][5794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471"
Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.625 [INFO][5794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471"
Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.661 [INFO][5802] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" HandleID="k8s-pod-network.a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0"
Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.661 [INFO][5802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.662 [INFO][5802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.671 [WARNING][5802] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" HandleID="k8s-pod-network.a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.671 [INFO][5802] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" HandleID="k8s-pod-network.a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.674 [INFO][5802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:59.678645 containerd[1480]: 2025-09-05 23:58:59.676 [INFO][5794] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:59.679292 containerd[1480]: time="2025-09-05T23:58:59.678671548Z" level=info msg="TearDown network for sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\" successfully" Sep 5 23:58:59.679292 containerd[1480]: time="2025-09-05T23:58:59.678699230Z" level=info msg="StopPodSandbox for \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\" returns successfully" Sep 5 23:58:59.680898 containerd[1480]: time="2025-09-05T23:58:59.679444043Z" level=info msg="RemovePodSandbox for \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\"" Sep 5 23:58:59.680898 containerd[1480]: time="2025-09-05T23:58:59.679479405Z" level=info msg="Forcibly stopping sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\"" Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.750 [WARNING][5817] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c543c41c-ee3f-499e-8d6b-b62b005decb4", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"8a479ee0db07552e2da3c6125ec735e360556d5df1570e48059109726c1699a7", Pod:"coredns-674b8bbfcf-8kktj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali256f81effc7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.751 [INFO][5817] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.751 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" iface="eth0" netns="" Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.751 [INFO][5817] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.751 [INFO][5817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.798 [INFO][5824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" HandleID="k8s-pod-network.a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.798 [INFO][5824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.798 [INFO][5824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.832 [WARNING][5824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" HandleID="k8s-pod-network.a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.833 [INFO][5824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" HandleID="k8s-pod-network.a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Workload="ci--4081--3--5--n--4ef3874a70-k8s-coredns--674b8bbfcf--8kktj-eth0" Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.836 [INFO][5824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:59.841656 containerd[1480]: 2025-09-05 23:58:59.838 [INFO][5817] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471" Sep 5 23:58:59.842131 containerd[1480]: time="2025-09-05T23:58:59.841805707Z" level=info msg="TearDown network for sandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\" successfully" Sep 5 23:58:59.852741 containerd[1480]: time="2025-09-05T23:58:59.852684323Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:58:59.853126 containerd[1480]: time="2025-09-05T23:58:59.853093352Z" level=info msg="RemovePodSandbox \"a135427da31d9a75973a6d1fc6fdcb4acd8f324b4ce07abbbc6a3d7ebda54471\" returns successfully" Sep 5 23:58:59.853620 containerd[1480]: time="2025-09-05T23:58:59.853586507Z" level=info msg="StopPodSandbox for \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\"" Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.917 [WARNING][5838] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0", GenerateName:"calico-apiserver-685cf96569-", Namespace:"calico-apiserver", SelfLink:"", UID:"84038953-2a1d-453b-a995-2e6cd5cc7120", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685cf96569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75", Pod:"calico-apiserver-685cf96569-gzvsl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4cc6558fbf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.918 [INFO][5838] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.918 [INFO][5838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" iface="eth0" netns="" Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.918 [INFO][5838] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.918 [INFO][5838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.951 [INFO][5845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" HandleID="k8s-pod-network.1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.951 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.951 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.967 [WARNING][5845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" HandleID="k8s-pod-network.1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.967 [INFO][5845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" HandleID="k8s-pod-network.1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.971 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:58:59.976697 containerd[1480]: 2025-09-05 23:58:59.973 [INFO][5838] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:58:59.976697 containerd[1480]: time="2025-09-05T23:58:59.976667488Z" level=info msg="TearDown network for sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\" successfully" Sep 5 23:58:59.976697 containerd[1480]: time="2025-09-05T23:58:59.976694890Z" level=info msg="StopPodSandbox for \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\" returns successfully" Sep 5 23:58:59.977642 containerd[1480]: time="2025-09-05T23:58:59.977609596Z" level=info msg="RemovePodSandbox for \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\"" Sep 5 23:58:59.977703 containerd[1480]: time="2025-09-05T23:58:59.977650079Z" level=info msg="Forcibly stopping sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\"" Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.036 [WARNING][5864] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0", GenerateName:"calico-apiserver-685cf96569-", Namespace:"calico-apiserver", SelfLink:"", UID:"84038953-2a1d-453b-a995-2e6cd5cc7120", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685cf96569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"24576da822eb11ebaed46845c89ad5bbe06589d356e473763474139d694d3b75", Pod:"calico-apiserver-685cf96569-gzvsl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4cc6558fbf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.037 [INFO][5864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.037 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" iface="eth0" netns="" Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.037 [INFO][5864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.037 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.082 [INFO][5873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" HandleID="k8s-pod-network.1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.084 [INFO][5873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.084 [INFO][5873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.098 [WARNING][5873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" HandleID="k8s-pod-network.1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.098 [INFO][5873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" HandleID="k8s-pod-network.1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--gzvsl-eth0" Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.103 [INFO][5873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:59:00.110904 containerd[1480]: 2025-09-05 23:59:00.109 [INFO][5864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae" Sep 5 23:59:00.111502 containerd[1480]: time="2025-09-05T23:59:00.110933462Z" level=info msg="TearDown network for sandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\" successfully" Sep 5 23:59:00.138793 containerd[1480]: time="2025-09-05T23:59:00.137487497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:59:00.139005 containerd[1480]: time="2025-09-05T23:59:00.138976760Z" level=info msg="RemovePodSandbox \"1a19b7b0d3c08e055140a3a27de7cf717f292b4e8a0744548debd8ec132842ae\" returns successfully" Sep 5 23:59:00.140760 containerd[1480]: time="2025-09-05T23:59:00.140725281Z" level=info msg="StopPodSandbox for \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\"" Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.252 [WARNING][5893] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"79a2c532-c00e-4e75-a9f5-90ed8209a139", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a", Pod:"goldmane-54d579b49d-t7j5b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali479b03d2385", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.252 [INFO][5893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.252 [INFO][5893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" iface="eth0" netns="" Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.252 [INFO][5893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.252 [INFO][5893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.306 [INFO][5900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" HandleID="k8s-pod-network.88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.307 [INFO][5900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.307 [INFO][5900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.320 [WARNING][5900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" HandleID="k8s-pod-network.88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.320 [INFO][5900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" HandleID="k8s-pod-network.88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.323 [INFO][5900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:59:00.333937 containerd[1480]: 2025-09-05 23:59:00.332 [INFO][5893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:59:00.336438 containerd[1480]: time="2025-09-05T23:59:00.335190322Z" level=info msg="TearDown network for sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\" successfully" Sep 5 23:59:00.336438 containerd[1480]: time="2025-09-05T23:59:00.335223964Z" level=info msg="StopPodSandbox for \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\" returns successfully" Sep 5 23:59:00.337874 containerd[1480]: time="2025-09-05T23:59:00.337390794Z" level=info msg="RemovePodSandbox for \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\"" Sep 5 23:59:00.341761 containerd[1480]: time="2025-09-05T23:59:00.340332597Z" level=info msg="Forcibly stopping sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\"" Sep 5 23:59:00.407662 kubelet[2614]: I0905 23:59:00.407202 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.423 [WARNING][5915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"79a2c532-c00e-4e75-a9f5-90ed8209a139", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"0f5f0491afcedd06c8746abded201c606f266655df2787b91013533ced004d4a", Pod:"goldmane-54d579b49d-t7j5b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali479b03d2385", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.424 [INFO][5915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.425 [INFO][5915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" iface="eth0" netns="" Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.425 [INFO][5915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.425 [INFO][5915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.484 [INFO][5922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" HandleID="k8s-pod-network.88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.485 [INFO][5922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.485 [INFO][5922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.500 [WARNING][5922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" HandleID="k8s-pod-network.88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.500 [INFO][5922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" HandleID="k8s-pod-network.88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Workload="ci--4081--3--5--n--4ef3874a70-k8s-goldmane--54d579b49d--t7j5b-eth0" Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.503 [INFO][5922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:59:00.512848 containerd[1480]: 2025-09-05 23:59:00.505 [INFO][5915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727" Sep 5 23:59:00.514073 containerd[1480]: time="2025-09-05T23:59:00.513730702Z" level=info msg="TearDown network for sandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\" successfully" Sep 5 23:59:00.521220 containerd[1480]: time="2025-09-05T23:59:00.521161696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:59:00.522807 containerd[1480]: time="2025-09-05T23:59:00.521255942Z" level=info msg="RemovePodSandbox \"88366c654ef50cbe6973d6356e6e3db313626795841d4750a454a7ff35eb3727\" returns successfully" Sep 5 23:59:00.523049 containerd[1480]: time="2025-09-05T23:59:00.523002263Z" level=info msg="StopPodSandbox for \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\"" Sep 5 23:59:00.636578 containerd[1480]: time="2025-09-05T23:59:00.636444784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:59:00.641779 containerd[1480]: time="2025-09-05T23:59:00.639601522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 23:59:00.644452 containerd[1480]: time="2025-09-05T23:59:00.643791732Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:59:00.647253 containerd[1480]: time="2025-09-05T23:59:00.647205888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.17239544s" Sep 5 23:59:00.647596 containerd[1480]: time="2025-09-05T23:59:00.647484227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 23:59:00.647907 containerd[1480]: time="2025-09-05T23:59:00.647821290Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:59:00.655299 containerd[1480]: time="2025-09-05T23:59:00.655219602Z" level=info msg="CreateContainer within sandbox \"ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 23:59:00.687845 containerd[1480]: time="2025-09-05T23:59:00.687755130Z" level=info msg="CreateContainer within sandbox \"ed7cd180fbeb0726804018581e60f49fe59970281af35b0cef47cc4b2e68770e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e9ab63f9613907ca428ea94d9c8ec3279352a08ee74b900d788082a7e6e0ae2e\"" Sep 5 23:59:00.690725 containerd[1480]: time="2025-09-05T23:59:00.690668292Z" level=info msg="StartContainer for \"e9ab63f9613907ca428ea94d9c8ec3279352a08ee74b900d788082a7e6e0ae2e\"" Sep 5 23:59:00.759216 systemd[1]: run-containerd-runc-k8s.io-e9ab63f9613907ca428ea94d9c8ec3279352a08ee74b900d788082a7e6e0ae2e-runc.AOsdUn.mount: Deactivated successfully. Sep 5 23:59:00.772805 systemd[1]: Started cri-containerd-e9ab63f9613907ca428ea94d9c8ec3279352a08ee74b900d788082a7e6e0ae2e.scope - libcontainer container e9ab63f9613907ca428ea94d9c8ec3279352a08ee74b900d788082a7e6e0ae2e. Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.681 [WARNING][5936] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0", GenerateName:"calico-apiserver-648b95987d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5bacc15-80ca-43c3-bafd-f08e810b113d", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648b95987d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f", Pod:"calico-apiserver-648b95987d-x5bx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2a7ce0290a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.681 [INFO][5936] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.682 [INFO][5936] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" iface="eth0" netns="" Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.682 [INFO][5936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.682 [INFO][5936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.768 [INFO][5944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" HandleID="k8s-pod-network.2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.768 [INFO][5944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.768 [INFO][5944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.800 [WARNING][5944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" HandleID="k8s-pod-network.2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.800 [INFO][5944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" HandleID="k8s-pod-network.2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.803 [INFO][5944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:59:00.808601 containerd[1480]: 2025-09-05 23:59:00.806 [INFO][5936] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:59:00.809051 containerd[1480]: time="2025-09-05T23:59:00.808645166Z" level=info msg="TearDown network for sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\" successfully" Sep 5 23:59:00.809051 containerd[1480]: time="2025-09-05T23:59:00.808671928Z" level=info msg="StopPodSandbox for \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\" returns successfully" Sep 5 23:59:00.810887 containerd[1480]: time="2025-09-05T23:59:00.810834117Z" level=info msg="RemovePodSandbox for \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\"" Sep 5 23:59:00.810887 containerd[1480]: time="2025-09-05T23:59:00.810884641Z" level=info msg="Forcibly stopping sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\"" Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.877 [WARNING][5977] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0", GenerateName:"calico-apiserver-648b95987d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5bacc15-80ca-43c3-bafd-f08e810b113d", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648b95987d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f", Pod:"calico-apiserver-648b95987d-x5bx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2a7ce0290a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.878 [INFO][5977] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.878 [INFO][5977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" iface="eth0" netns="" Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.878 [INFO][5977] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.878 [INFO][5977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.919 [INFO][5985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" HandleID="k8s-pod-network.2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.919 [INFO][5985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.920 [INFO][5985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.934 [WARNING][5985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" HandleID="k8s-pod-network.2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.934 [INFO][5985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" HandleID="k8s-pod-network.2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.936 [INFO][5985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:59:00.941614 containerd[1480]: 2025-09-05 23:59:00.939 [INFO][5977] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd" Sep 5 23:59:00.941614 containerd[1480]: time="2025-09-05T23:59:00.941584634Z" level=info msg="TearDown network for sandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\" successfully" Sep 5 23:59:00.948659 containerd[1480]: time="2025-09-05T23:59:00.948581038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:59:00.948787 containerd[1480]: time="2025-09-05T23:59:00.948723768Z" level=info msg="RemovePodSandbox \"2934954b85fc734ca7635fa9ea4cbc24ca401b60b3efca9007c7c77166b161fd\" returns successfully" Sep 5 23:59:01.091059 containerd[1480]: time="2025-09-05T23:59:01.088995994Z" level=info msg="StartContainer for \"e9ab63f9613907ca428ea94d9c8ec3279352a08ee74b900d788082a7e6e0ae2e\" returns successfully" Sep 5 23:59:01.463174 kubelet[2614]: I0905 23:59:01.461225 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-685cf96569-gzvsl" podStartSLOduration=33.333180228 podStartE2EDuration="44.461206197s" podCreationTimestamp="2025-09-05 23:58:17 +0000 UTC" firstStartedPulling="2025-09-05 23:58:47.34421533 +0000 UTC m=+49.583367719" lastFinishedPulling="2025-09-05 23:58:58.472241259 +0000 UTC m=+60.711393688" observedRunningTime="2025-09-05 23:58:59.428727595 +0000 UTC m=+61.667880024" watchObservedRunningTime="2025-09-05 23:59:01.461206197 +0000 UTC m=+63.700358546" Sep 5 23:59:02.054108 kubelet[2614]: I0905 23:59:02.053962 2614 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 23:59:02.058457 kubelet[2614]: I0905 23:59:02.058267 2614 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 23:59:02.636989 kubelet[2614]: I0905 23:59:02.636480 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:59:03.454305 kubelet[2614]: I0905 23:59:03.454224 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7tcpm" podStartSLOduration=26.348540487 podStartE2EDuration="44.454204689s" podCreationTimestamp="2025-09-05 23:58:19 +0000 UTC" firstStartedPulling="2025-09-05 23:58:42.543214441 +0000 UTC m=+44.782366830" 
lastFinishedPulling="2025-09-05 23:59:00.648878643 +0000 UTC m=+62.888031032" observedRunningTime="2025-09-05 23:59:01.472594599 +0000 UTC m=+63.711746988" watchObservedRunningTime="2025-09-05 23:59:03.454204689 +0000 UTC m=+65.693357118" Sep 5 23:59:04.178773 kubelet[2614]: I0905 23:59:04.178728 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:59:04.180202 containerd[1480]: time="2025-09-05T23:59:04.180163278Z" level=info msg="StopContainer for \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\" with timeout 30 (s)" Sep 5 23:59:04.181920 containerd[1480]: time="2025-09-05T23:59:04.180626746Z" level=info msg="Stop container \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\" with signal terminated" Sep 5 23:59:04.218953 systemd[1]: cri-containerd-512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88.scope: Deactivated successfully. Sep 5 23:59:04.219685 systemd[1]: cri-containerd-512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88.scope: Consumed 1.192s CPU time. Sep 5 23:59:04.246837 systemd[1]: Created slice kubepods-besteffort-pod23aeb662_fa15_40db_b7b6_3c8316b74831.slice - libcontainer container kubepods-besteffort-pod23aeb662_fa15_40db_b7b6_3c8316b74831.slice. Sep 5 23:59:04.270546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88-rootfs.mount: Deactivated successfully. Sep 5 23:59:04.391444 kubelet[2614]: I0905 23:59:04.389924 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/23aeb662-fa15-40db-b7b6-3c8316b74831-calico-apiserver-certs\") pod \"calico-apiserver-685cf96569-h8f9r\" (UID: \"23aeb662-fa15-40db-b7b6-3c8316b74831\") " pod="calico-apiserver/calico-apiserver-685cf96569-h8f9r" Sep 5 23:59:04.391444 kubelet[2614]: I0905 23:59:04.389988 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5dwp\" (UniqueName: \"kubernetes.io/projected/23aeb662-fa15-40db-b7b6-3c8316b74831-kube-api-access-p5dwp\") pod \"calico-apiserver-685cf96569-h8f9r\" (UID: \"23aeb662-fa15-40db-b7b6-3c8316b74831\") " pod="calico-apiserver/calico-apiserver-685cf96569-h8f9r" Sep 5 23:59:04.420240 containerd[1480]: time="2025-09-05T23:59:04.419759064Z" level=info msg="shim disconnected" id=512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88 namespace=k8s.io Sep 5 23:59:04.420240 containerd[1480]: time="2025-09-05T23:59:04.419942155Z" level=warning msg="cleaning up after shim disconnected" id=512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88 namespace=k8s.io Sep 5 23:59:04.420240 containerd[1480]: time="2025-09-05T23:59:04.419954195Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:59:04.452020 containerd[1480]: time="2025-09-05T23:59:04.451818615Z" level=info msg="StopContainer for \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\" returns successfully" Sep 5 23:59:04.453190 containerd[1480]: time="2025-09-05T23:59:04.453019288Z" level=info msg="StopPodSandbox for \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\"" Sep 5 23:59:04.453190 containerd[1480]: time="2025-09-05T23:59:04.453089133Z" level=info msg="Container to stop \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:59:04.458867 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b-shm.mount: Deactivated successfully. Sep 5 23:59:04.467801 systemd[1]: cri-containerd-c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b.scope: Deactivated successfully. Sep 5 23:59:04.526639 containerd[1480]: time="2025-09-05T23:59:04.525009391Z" level=info msg="shim disconnected" id=c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b namespace=k8s.io Sep 5 23:59:04.526639 containerd[1480]: time="2025-09-05T23:59:04.526510482Z" level=warning msg="cleaning up after shim disconnected" id=c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b namespace=k8s.io Sep 5 23:59:04.526639 containerd[1480]: time="2025-09-05T23:59:04.526557685Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:59:04.529066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b-rootfs.mount: Deactivated successfully. Sep 5 23:59:04.554644 containerd[1480]: time="2025-09-05T23:59:04.554602513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685cf96569-h8f9r,Uid:23aeb662-fa15-40db-b7b6-3c8316b74831,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:59:04.653590 systemd-networkd[1376]: calie6a72fc5368: Link DOWN Sep 5 23:59:04.653596 systemd-networkd[1376]: calie6a72fc5368: Lost carrier Sep 5 23:59:04.881702 systemd-networkd[1376]: cali973abd9345b: Link UP Sep 5 23:59:04.882833 systemd-networkd[1376]: cali973abd9345b: Gained carrier Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.633 [INFO][6090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0 calico-apiserver-685cf96569- calico-apiserver 23aeb662-fa15-40db-b7b6-3c8316b74831 1197 0 2025-09-05 23:59:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:685cf96569 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-4ef3874a70 calico-apiserver-685cf96569-h8f9r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali973abd9345b [] [] }} ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-h8f9r" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.633 [INFO][6090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-h8f9r" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.741 [INFO][6112] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" HandleID="k8s-pod-network.6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.741 [INFO][6112] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" HandleID="k8s-pod-network.6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bc00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-4ef3874a70", "pod":"calico-apiserver-685cf96569-h8f9r", "timestamp":"2025-09-05 23:59:04.741592496 +0000 UTC"}, Hostname:"ci-4081-3-5-n-4ef3874a70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.741 [INFO][6112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.741 [INFO][6112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.741 [INFO][6112] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-4ef3874a70' Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.762 [INFO][6112] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.784 [INFO][6112] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.792 [INFO][6112] ipam/ipam.go 511: Trying affinity for 192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.802 [INFO][6112] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.806 [INFO][6112] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.0/26 host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.806 [INFO][6112] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.0/26 handle="k8s-pod-network.6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.810 [INFO][6112] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3 Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.832 [INFO][6112] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.0/26 handle="k8s-pod-network.6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.868 [INFO][6112] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.10/26] block=192.168.102.0/26 handle="k8s-pod-network.6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.869 [INFO][6112] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.10/26] handle="k8s-pod-network.6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" host="ci-4081-3-5-n-4ef3874a70" Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.869 
[INFO][6112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:59:04.948360 containerd[1480]: 2025-09-05 23:59:04.869 [INFO][6112] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.10/26] IPv6=[] ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" HandleID="k8s-pod-network.6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0" Sep 5 23:59:04.949231 containerd[1480]: 2025-09-05 23:59:04.873 [INFO][6090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-h8f9r" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0", GenerateName:"calico-apiserver-685cf96569-", Namespace:"calico-apiserver", SelfLink:"", UID:"23aeb662-fa15-40db-b7b6-3c8316b74831", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 59, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685cf96569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"", Pod:"calico-apiserver-685cf96569-h8f9r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali973abd9345b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:59:04.949231 containerd[1480]: 2025-09-05 23:59:04.873 [INFO][6090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.10/32] ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-h8f9r" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0" Sep 5 23:59:04.949231 containerd[1480]: 2025-09-05 23:59:04.873 [INFO][6090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali973abd9345b ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-h8f9r" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0" Sep 5 23:59:04.949231 containerd[1480]: 2025-09-05 23:59:04.880 [INFO][6090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-h8f9r" 
WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0" Sep 5 23:59:04.949231 containerd[1480]: 2025-09-05 23:59:04.880 [INFO][6090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-h8f9r" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0", GenerateName:"calico-apiserver-685cf96569-", Namespace:"calico-apiserver", SelfLink:"", UID:"23aeb662-fa15-40db-b7b6-3c8316b74831", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 59, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685cf96569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-4ef3874a70", ContainerID:"6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3", Pod:"calico-apiserver-685cf96569-h8f9r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali973abd9345b", MAC:"12:ed:7d:12:eb:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:59:04.949231 containerd[1480]: 2025-09-05 23:59:04.945 [INFO][6090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3" Namespace="calico-apiserver" Pod="calico-apiserver-685cf96569-h8f9r" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--685cf96569--h8f9r-eth0" Sep 5 23:59:04.981451 containerd[1480]: time="2025-09-05T23:59:04.980370792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:59:04.981451 containerd[1480]: time="2025-09-05T23:59:04.980478919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:59:04.981451 containerd[1480]: time="2025-09-05T23:59:04.980495040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:59:04.985455 containerd[1480]: time="2025-09-05T23:59:04.981905726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:59:05.018729 systemd[1]: Started cri-containerd-6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3.scope - libcontainer container 6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3. 
Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:04.651 [INFO][6089] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:04.651 [INFO][6089] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" iface="eth0" netns="/var/run/netns/cni-a5e1cad0-8fbf-b28a-cd47-514d324e2162" Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:04.652 [INFO][6089] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" iface="eth0" netns="/var/run/netns/cni-a5e1cad0-8fbf-b28a-cd47-514d324e2162" Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:04.676 [INFO][6089] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" after=23.9759ms iface="eth0" netns="/var/run/netns/cni-a5e1cad0-8fbf-b28a-cd47-514d324e2162" Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:04.676 [INFO][6089] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:04.676 [INFO][6089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:04.739 [INFO][6119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:04.742 [INFO][6119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:04.870 [INFO][6119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:05.054 [INFO][6119] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:05.054 [INFO][6119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:05.057 [INFO][6119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:59:05.063656 containerd[1480]: 2025-09-05 23:59:05.060 [INFO][6089] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 5 23:59:05.064721 containerd[1480]: time="2025-09-05T23:59:05.063896198Z" level=info msg="TearDown network for sandbox \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\" successfully" Sep 5 23:59:05.064721 containerd[1480]: time="2025-09-05T23:59:05.063934761Z" level=info msg="StopPodSandbox for \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\" returns successfully" Sep 5 23:59:05.122659 containerd[1480]: time="2025-09-05T23:59:05.122610981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685cf96569-h8f9r,Uid:23aeb662-fa15-40db-b7b6-3c8316b74831,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3\"" Sep 5 23:59:05.130538 containerd[1480]: time="2025-09-05T23:59:05.130466525Z" level=info msg="CreateContainer within sandbox \"6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:59:05.147151 containerd[1480]: time="2025-09-05T23:59:05.145317320Z" level=info msg="CreateContainer within sandbox \"6ed7dc21ac19b2d39723b449ee0dae22e21f1743c97016aa92d88a293e317fe3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"89804f7eb8386dd4f82db9c95b93fa76dd82cd1263e728e201d0d5e7acf4035b\"" Sep 5 23:59:05.147317 containerd[1480]: time="2025-09-05T23:59:05.147219753Z" level=info msg="StartContainer for \"89804f7eb8386dd4f82db9c95b93fa76dd82cd1263e728e201d0d5e7acf4035b\"" Sep 5 23:59:05.182766 systemd[1]: Started cri-containerd-89804f7eb8386dd4f82db9c95b93fa76dd82cd1263e728e201d0d5e7acf4035b.scope - libcontainer container 89804f7eb8386dd4f82db9c95b93fa76dd82cd1263e728e201d0d5e7acf4035b. Sep 5 23:59:05.195908 kubelet[2614]: I0905 23:59:05.195510 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r7fh\" (UniqueName: \"kubernetes.io/projected/cbe0b2e4-eb4a-4cc8-acdc-005b19facc59-kube-api-access-6r7fh\") pod \"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59\" (UID: \"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59\") " Sep 5 23:59:05.195908 kubelet[2614]: I0905 23:59:05.195603 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cbe0b2e4-eb4a-4cc8-acdc-005b19facc59-calico-apiserver-certs\") pod \"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59\" (UID: \"cbe0b2e4-eb4a-4cc8-acdc-005b19facc59\") " Sep 5 23:59:05.202364 kubelet[2614]: I0905 23:59:05.201997 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe0b2e4-eb4a-4cc8-acdc-005b19facc59-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "cbe0b2e4-eb4a-4cc8-acdc-005b19facc59" (UID: "cbe0b2e4-eb4a-4cc8-acdc-005b19facc59"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 23:59:05.204005 kubelet[2614]: I0905 23:59:05.203622 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe0b2e4-eb4a-4cc8-acdc-005b19facc59-kube-api-access-6r7fh" (OuterVolumeSpecName: "kube-api-access-6r7fh") pod "cbe0b2e4-eb4a-4cc8-acdc-005b19facc59" (UID: "cbe0b2e4-eb4a-4cc8-acdc-005b19facc59"). InnerVolumeSpecName "kube-api-access-6r7fh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 23:59:05.230619 containerd[1480]: time="2025-09-05T23:59:05.230311533Z" level=info msg="StartContainer for \"89804f7eb8386dd4f82db9c95b93fa76dd82cd1263e728e201d0d5e7acf4035b\" returns successfully" Sep 5 23:59:05.279003 systemd[1]: run-netns-cni\x2da5e1cad0\x2d8fbf\x2db28a\x2dcd47\x2d514d324e2162.mount: Deactivated successfully. Sep 5 23:59:05.279113 systemd[1]: var-lib-kubelet-pods-cbe0b2e4\x2deb4a\x2d4cc8\x2dacdc\x2d005b19facc59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6r7fh.mount: Deactivated successfully. Sep 5 23:59:05.279172 systemd[1]: var-lib-kubelet-pods-cbe0b2e4\x2deb4a\x2d4cc8\x2dacdc\x2d005b19facc59-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 5 23:59:05.296797 kubelet[2614]: I0905 23:59:05.296724 2614 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6r7fh\" (UniqueName: \"kubernetes.io/projected/cbe0b2e4-eb4a-4cc8-acdc-005b19facc59-kube-api-access-6r7fh\") on node \"ci-4081-3-5-n-4ef3874a70\" DevicePath \"\"" Sep 5 23:59:05.296797 kubelet[2614]: I0905 23:59:05.296761 2614 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cbe0b2e4-eb4a-4cc8-acdc-005b19facc59-calico-apiserver-certs\") on node \"ci-4081-3-5-n-4ef3874a70\" DevicePath \"\"" Sep 5 23:59:05.459664 kubelet[2614]: I0905 23:59:05.458258 2614 scope.go:117] "RemoveContainer" containerID="512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88" Sep 5 23:59:05.462512 systemd[1]: Removed slice kubepods-besteffort-podcbe0b2e4_eb4a_4cc8_acdc_005b19facc59.slice - libcontainer container kubepods-besteffort-podcbe0b2e4_eb4a_4cc8_acdc_005b19facc59.slice. Sep 5 23:59:05.462652 systemd[1]: kubepods-besteffort-podcbe0b2e4_eb4a_4cc8_acdc_005b19facc59.slice: Consumed 1.213s CPU time. 
Sep 5 23:59:05.463786 containerd[1480]: time="2025-09-05T23:59:05.463741940Z" level=info msg="RemoveContainer for \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\"" Sep 5 23:59:05.469375 containerd[1480]: time="2025-09-05T23:59:05.469328189Z" level=info msg="RemoveContainer for \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\" returns successfully" Sep 5 23:59:05.471953 kubelet[2614]: I0905 23:59:05.471926 2614 scope.go:117] "RemoveContainer" containerID="512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88" Sep 5 23:59:05.473552 containerd[1480]: time="2025-09-05T23:59:05.473489355Z" level=error msg="ContainerStatus for \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\": not found" Sep 5 23:59:05.473764 kubelet[2614]: E0905 23:59:05.473734 2614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\": not found" containerID="512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88" Sep 5 23:59:05.473823 kubelet[2614]: I0905 23:59:05.473775 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88"} err="failed to get container status \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\": rpc error: code = NotFound desc = an error occurred when try to find container \"512e2c512b888ff5267b3761f0d58eb82480ac4756f3de575cdaf9c6c5cbcc88\": not found" Sep 5 23:59:05.480341 kubelet[2614]: I0905 23:59:05.480062 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-685cf96569-h8f9r" podStartSLOduration=1.480041381 podStartE2EDuration="1.480041381s" podCreationTimestamp="2025-09-05 23:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:59:05.479018641 +0000 UTC m=+67.718171030" watchObservedRunningTime="2025-09-05 23:59:05.480041381 +0000 UTC m=+67.719193770" Sep 5 23:59:05.910371 kubelet[2614]: I0905 23:59:05.910301 2614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe0b2e4-eb4a-4cc8-acdc-005b19facc59" path="/var/lib/kubelet/pods/cbe0b2e4-eb4a-4cc8-acdc-005b19facc59/volumes" Sep 5 23:59:05.997389 systemd-networkd[1376]: cali973abd9345b: Gained IPv6LL Sep 5 23:59:06.920282 containerd[1480]: time="2025-09-05T23:59:06.919852124Z" level=info msg="StopContainer for \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\" with timeout 30 (s)" Sep 5 23:59:06.922498 containerd[1480]: time="2025-09-05T23:59:06.921413533Z" level=info msg="Stop container \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\" with signal terminated" Sep 5 23:59:06.997636 systemd[1]: cri-containerd-2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384.scope: Deactivated successfully. Sep 5 23:59:07.039329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384-rootfs.mount: Deactivated successfully. 
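Two patterns are visible just above. First, kubelet treats a NotFound ContainerStatus after RemoveContainer as "already gone" and carries on. Second, StopContainer uses the usual two-phase stop: deliver SIGTERM ("Stop container ... with signal terminated"), wait up to the 30 s timeout, and only then escalate. A generic Go sketch of that second pattern against a plain process, not containerd's actual task API:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopGracefully sends SIGTERM, waits up to timeout for the process
// to exit, and falls back to SIGKILL, mirroring the two-phase stop
// visible in the StopContainer log lines.
func stopGracefully(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // escalate to SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// The log's grace period is "timeout 30 (s)"; shortened here for a demo.
	err := stopGracefully(cmd, 2*time.Second)
	fmt.Println("stopped:", err)
}
```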
Sep 5 23:59:07.046298 containerd[1480]: time="2025-09-05T23:59:07.045777840Z" level=info msg="shim disconnected" id=2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384 namespace=k8s.io Sep 5 23:59:07.046298 containerd[1480]: time="2025-09-05T23:59:07.045837243Z" level=warning msg="cleaning up after shim disconnected" id=2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384 namespace=k8s.io Sep 5 23:59:07.046298 containerd[1480]: time="2025-09-05T23:59:07.045847444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:59:07.076264 containerd[1480]: time="2025-09-05T23:59:07.075899147Z" level=info msg="StopContainer for \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\" returns successfully" Sep 5 23:59:07.076992 containerd[1480]: time="2025-09-05T23:59:07.076913044Z" level=info msg="StopPodSandbox for \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\"" Sep 5 23:59:07.076992 containerd[1480]: time="2025-09-05T23:59:07.076978127Z" level=info msg="Container to stop \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:59:07.081784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f-shm.mount: Deactivated successfully. Sep 5 23:59:07.101727 systemd[1]: cri-containerd-961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f.scope: Deactivated successfully. Sep 5 23:59:07.133880 containerd[1480]: time="2025-09-05T23:59:07.133804752Z" level=info msg="shim disconnected" id=961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f namespace=k8s.io Sep 5 23:59:07.133880 containerd[1480]: time="2025-09-05T23:59:07.133872196Z" level=warning msg="cleaning up after shim disconnected" id=961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f namespace=k8s.io Sep 5 23:59:07.133880 containerd[1480]: time="2025-09-05T23:59:07.133882797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:59:07.137098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f-rootfs.mount: Deactivated successfully. Sep 5 23:59:07.158338 containerd[1480]: time="2025-09-05T23:59:07.158272987Z" level=warning msg="cleanup warnings time=\"2025-09-05T23:59:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 5 23:59:07.256302 systemd-networkd[1376]: calid2a7ce0290a: Link DOWN Sep 5 23:59:07.256315 systemd-networkd[1376]: calid2a7ce0290a: Lost carrier Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.252 [INFO][6303] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.253 [INFO][6303] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" iface="eth0" netns="/var/run/netns/cni-711017fc-1a1e-62e2-8d35-c5b4d7ac0ae4" Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.255 [INFO][6303] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" iface="eth0" netns="/var/run/netns/cni-711017fc-1a1e-62e2-8d35-c5b4d7ac0ae4" Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.271 [INFO][6303] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" after=18.388578ms iface="eth0" netns="/var/run/netns/cni-711017fc-1a1e-62e2-8d35-c5b4d7ac0ae4" Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.271 [INFO][6303] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.274 [INFO][6303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.312 [INFO][6311] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.313 [INFO][6311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.313 [INFO][6311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.390 [INFO][6311] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.390 [INFO][6311] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.397 [INFO][6311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:59:07.402535 containerd[1480]: 2025-09-05 23:59:07.398 [INFO][6303] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 5 23:59:07.403590 containerd[1480]: time="2025-09-05T23:59:07.402880006Z" level=info msg="TearDown network for sandbox \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\" successfully" Sep 5 23:59:07.403590 containerd[1480]: time="2025-09-05T23:59:07.402908847Z" level=info msg="StopPodSandbox for \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\" returns successfully" Sep 5 23:59:07.408328 systemd[1]: run-netns-cni\x2d711017fc\x2d1a1e\x2d62e2\x2d8d35\x2dc5b4d7ac0ae4.mount: Deactivated successfully. 
Sep 5 23:59:07.471043 kubelet[2614]: I0905 23:59:07.470943 2614 scope.go:117] "RemoveContainer" containerID="2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384" Sep 5 23:59:07.473032 containerd[1480]: time="2025-09-05T23:59:07.472943684Z" level=info msg="RemoveContainer for \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\"" Sep 5 23:59:07.479023 containerd[1480]: time="2025-09-05T23:59:07.478558954Z" level=info msg="RemoveContainer for \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\" returns successfully" Sep 5 23:59:07.480098 kubelet[2614]: I0905 23:59:07.479956 2614 scope.go:117] "RemoveContainer" containerID="2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384" Sep 5 23:59:07.481089 containerd[1480]: time="2025-09-05T23:59:07.480634549Z" level=error msg="ContainerStatus for \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\": not found" Sep 5 23:59:07.481277 kubelet[2614]: E0905 23:59:07.481007 2614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\": not found" containerID="2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384" Sep 5 23:59:07.481277 kubelet[2614]: I0905 23:59:07.481038 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384"} err="failed to get container status \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a1a8e15ba5f377cdff15b1ab3bceb1fa98c24b70cf09c7bedaaf288ee562384\": not found" Sep 5 23:59:07.513996 kubelet[2614]: I0905 23:59:07.513348 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr9xx\" (UniqueName: \"kubernetes.io/projected/d5bacc15-80ca-43c3-bafd-f08e810b113d-kube-api-access-hr9xx\") pod \"d5bacc15-80ca-43c3-bafd-f08e810b113d\" (UID: \"d5bacc15-80ca-43c3-bafd-f08e810b113d\") " Sep 5 23:59:07.513996 kubelet[2614]: I0905 23:59:07.513408 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5bacc15-80ca-43c3-bafd-f08e810b113d-calico-apiserver-certs\") pod \"d5bacc15-80ca-43c3-bafd-f08e810b113d\" (UID: \"d5bacc15-80ca-43c3-bafd-f08e810b113d\") " Sep 5 23:59:07.523139 systemd[1]: var-lib-kubelet-pods-d5bacc15\x2d80ca\x2d43c3\x2dbafd\x2df08e810b113d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhr9xx.mount: Deactivated successfully. Sep 5 23:59:07.524919 kubelet[2614]: I0905 23:59:07.524852 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5bacc15-80ca-43c3-bafd-f08e810b113d-kube-api-access-hr9xx" (OuterVolumeSpecName: "kube-api-access-hr9xx") pod "d5bacc15-80ca-43c3-bafd-f08e810b113d" (UID: "d5bacc15-80ca-43c3-bafd-f08e810b113d"). InnerVolumeSpecName "kube-api-access-hr9xx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 23:59:07.526502 kubelet[2614]: I0905 23:59:07.526399 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5bacc15-80ca-43c3-bafd-f08e810b113d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "d5bacc15-80ca-43c3-bafd-f08e810b113d" (UID: "d5bacc15-80ca-43c3-bafd-f08e810b113d"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 23:59:07.614498 kubelet[2614]: I0905 23:59:07.614088 2614 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hr9xx\" (UniqueName: \"kubernetes.io/projected/d5bacc15-80ca-43c3-bafd-f08e810b113d-kube-api-access-hr9xx\") on node \"ci-4081-3-5-n-4ef3874a70\" DevicePath \"\"" Sep 5 23:59:07.614498 kubelet[2614]: I0905 23:59:07.614147 2614 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5bacc15-80ca-43c3-bafd-f08e810b113d-calico-apiserver-certs\") on node \"ci-4081-3-5-n-4ef3874a70\" DevicePath \"\"" Sep 5 23:59:07.778669 systemd[1]: Removed slice kubepods-besteffort-podd5bacc15_80ca_43c3_bafd_f08e810b113d.slice - libcontainer container kubepods-besteffort-podd5bacc15_80ca_43c3_bafd_f08e810b113d.slice. Sep 5 23:59:07.913310 kubelet[2614]: I0905 23:59:07.913241 2614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5bacc15-80ca-43c3-bafd-f08e810b113d" path="/var/lib/kubelet/pods/d5bacc15-80ca-43c3-bafd-f08e810b113d/volumes" Sep 5 23:59:08.038999 systemd[1]: var-lib-kubelet-pods-d5bacc15\x2d80ca\x2d43c3\x2dbafd\x2df08e810b113d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 5 23:59:48.409105 systemd[1]: Started sshd@8-138.199.175.7:22-120.46.163.82:27239.service - OpenSSH per-connection server daemon (120.46.163.82:27239). Sep 5 23:59:48.532685 sshd[6430]: Connection reset by 120.46.163.82 port 27239 [preauth] Sep 5 23:59:48.536021 systemd[1]: sshd@8-138.199.175.7:22-120.46.163.82:27239.service: Deactivated successfully. Sep 6 00:00:00.954361 containerd[1480]: time="2025-09-06T00:00:00.954235102Z" level=info msg="StopPodSandbox for \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\"" Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.000 [WARNING][6528] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.000 [INFO][6528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.000 [INFO][6528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" iface="eth0" netns="" Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.000 [INFO][6528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.000 [INFO][6528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.024 [INFO][6535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.024 [INFO][6535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.024 [INFO][6535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.036 [WARNING][6535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.037 [INFO][6535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.039 [INFO][6535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:01.043491 containerd[1480]: 2025-09-06 00:00:01.041 [INFO][6528] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 6 00:00:01.043491 containerd[1480]: time="2025-09-06T00:00:01.043249006Z" level=info msg="TearDown network for sandbox \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\" successfully" Sep 6 00:00:01.043491 containerd[1480]: time="2025-09-06T00:00:01.043290007Z" level=info msg="StopPodSandbox for \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\" returns successfully" Sep 6 00:00:01.044534 containerd[1480]: time="2025-09-06T00:00:01.044199576Z" level=info msg="RemovePodSandbox for \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\"" Sep 6 00:00:01.044534 containerd[1480]: time="2025-09-06T00:00:01.044240256Z" level=info msg="Forcibly stopping sandbox \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\"" Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.093 [WARNING][6549] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.094 [INFO][6549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.094 [INFO][6549] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" iface="eth0" netns="" Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.094 [INFO][6549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.094 [INFO][6549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.120 [INFO][6556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.120 [INFO][6556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.121 [INFO][6556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.132 [WARNING][6556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.132 [INFO][6556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" HandleID="k8s-pod-network.c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--pmqf8-eth0" Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.135 [INFO][6556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:01.138930 containerd[1480]: 2025-09-06 00:00:01.136 [INFO][6549] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b" Sep 6 00:00:01.140519 containerd[1480]: time="2025-09-06T00:00:01.139839971Z" level=info msg="TearDown network for sandbox \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\" successfully" Sep 6 00:00:01.146316 containerd[1480]: time="2025-09-06T00:00:01.146050513Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 6 00:00:01.146316 containerd[1480]: time="2025-09-06T00:00:01.146154354Z" level=info msg="RemovePodSandbox \"c77f2c86fc5198c86459e0f5c112f7b51c3321e27c5a09c270e438175a9bd10b\" returns successfully" Sep 6 00:00:01.147022 containerd[1480]: time="2025-09-06T00:00:01.146918842Z" level=info msg="StopPodSandbox for \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\"" Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.189 [WARNING][6571] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.190 [INFO][6571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.190 [INFO][6571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" iface="eth0" netns="" Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.190 [INFO][6571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.190 [INFO][6571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.214 [INFO][6578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.214 [INFO][6578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.215 [INFO][6578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.225 [WARNING][6578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.225 [INFO][6578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.228 [INFO][6578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:01.232788 containerd[1480]: 2025-09-06 00:00:01.230 [INFO][6571] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 6 00:00:01.232788 containerd[1480]: time="2025-09-06T00:00:01.232356215Z" level=info msg="TearDown network for sandbox \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\" successfully" Sep 6 00:00:01.232788 containerd[1480]: time="2025-09-06T00:00:01.232390455Z" level=info msg="StopPodSandbox for \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\" returns successfully" Sep 6 00:00:01.235128 containerd[1480]: time="2025-09-06T00:00:01.235088362Z" level=info msg="RemovePodSandbox for \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\"" Sep 6 00:00:01.235256 containerd[1480]: time="2025-09-06T00:00:01.235136843Z" level=info msg="Forcibly stopping sandbox \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\"" Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.287 [WARNING][6592] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" WorkloadEndpoint="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.288 [INFO][6592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.288 [INFO][6592] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" iface="eth0" netns="" Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.288 [INFO][6592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.288 [INFO][6592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.317 [INFO][6599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.317 [INFO][6599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.317 [INFO][6599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.332 [WARNING][6599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.332 [INFO][6599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" HandleID="k8s-pod-network.961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Workload="ci--4081--3--5--n--4ef3874a70-k8s-calico--apiserver--648b95987d--x5bx7-eth0" Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.335 [INFO][6599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:01.339051 containerd[1480]: 2025-09-06 00:00:01.337 [INFO][6592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f" Sep 6 00:00:01.339701 containerd[1480]: time="2025-09-06T00:00:01.339097201Z" level=info msg="TearDown network for sandbox \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\" successfully" Sep 6 00:00:01.344587 containerd[1480]: time="2025-09-06T00:00:01.344522335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 6 00:00:01.346260 containerd[1480]: time="2025-09-06T00:00:01.344631656Z" level=info msg="RemovePodSandbox \"961690611b50eab575376469e77c70d101f30500168d28bc4c2c710c70ab487f\" returns successfully" Sep 6 00:00:11.213757 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Sep 6 00:00:11.236863 systemd[1]: logrotate.service: Deactivated successfully. Sep 6 00:00:14.920639 update_engine[1460]: I20250906 00:00:14.920549 1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 6 00:00:14.920639 update_engine[1460]: I20250906 00:00:14.920624 1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 6 00:00:14.921254 update_engine[1460]: I20250906 00:00:14.920994 1460 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 6 00:00:14.923432 update_engine[1460]: I20250906 00:00:14.923362 1460 omaha_request_params.cc:62] Current group set to lts Sep 6 00:00:14.923565 update_engine[1460]: I20250906 00:00:14.923533 1460 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 6 00:00:14.923565 update_engine[1460]: I20250906 00:00:14.923547 1460 update_attempter.cc:643] Scheduling an action processor start. 
Sep 6 00:00:14.923638 update_engine[1460]: I20250906 00:00:14.923568 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 6 00:00:14.924747 update_engine[1460]: I20250906 00:00:14.924705 1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 6 00:00:14.930485 update_engine[1460]: I20250906 00:00:14.926116 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 6 00:00:14.930485 update_engine[1460]: I20250906 00:00:14.926152 1460 omaha_request_action.cc:272] Request: Sep 6 00:00:14.930485 update_engine[1460]: Sep 6 00:00:14.930485 update_engine[1460]: Sep 6 00:00:14.930485 update_engine[1460]: Sep 6 00:00:14.930485 update_engine[1460]: Sep 6 00:00:14.930485 update_engine[1460]: Sep 6 00:00:14.930485 update_engine[1460]: Sep 6 00:00:14.930485 update_engine[1460]: Sep 6 00:00:14.930485 update_engine[1460]: Sep 6 00:00:14.930485 update_engine[1460]: I20250906 00:00:14.926162 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:00:14.930485 update_engine[1460]: I20250906 00:00:14.929278 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:00:14.930485 update_engine[1460]: I20250906 00:00:14.929656 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:00:14.932549 update_engine[1460]: E20250906 00:00:14.932515 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:00:14.932873 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 6 00:00:14.933114 update_engine[1460]: I20250906 00:00:14.932808 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 6 00:00:21.346613 systemd[1]: run-containerd-runc-k8s.io-532ee49194b1e24d777a829b44c9dee8a2f93f13b5d7067db2651770047dce71-runc.jXtgY5.mount: Deactivated successfully. Sep 6 00:00:24.857548 update_engine[1460]: I20250906 00:00:24.856925 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:00:24.857548 update_engine[1460]: I20250906 00:00:24.857208 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:00:24.858223 update_engine[1460]: I20250906 00:00:24.858168 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:00:24.859006 update_engine[1460]: E20250906 00:00:24.858906 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:00:24.859006 update_engine[1460]: I20250906 00:00:24.858979 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 6 00:00:34.863689 update_engine[1460]: I20250906 00:00:34.863500 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:00:34.864197 update_engine[1460]: I20250906 00:00:34.863968 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:00:34.864592 update_engine[1460]: I20250906 00:00:34.864400 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:00:34.865205 update_engine[1460]: E20250906 00:00:34.865163 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:00:34.865285 update_engine[1460]: I20250906 00:00:34.865243 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 6 00:00:35.136931 systemd[1]: Started sshd@9-138.199.175.7:22-139.178.68.195:34016.service - OpenSSH per-connection server daemon (139.178.68.195:34016). 
Sep 6 00:00:36.135079 sshd[6698]: Accepted publickey for core from 139.178.68.195 port 34016 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:36.137089 sshd[6698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:36.146140 systemd-logind[1457]: New session 8 of user core. Sep 6 00:00:36.149745 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 6 00:00:36.946912 sshd[6698]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:36.952492 systemd[1]: sshd@9-138.199.175.7:22-139.178.68.195:34016.service: Deactivated successfully. Sep 6 00:00:36.955952 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:00:36.959060 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:00:36.961412 systemd-logind[1457]: Removed session 8. Sep 6 00:00:42.128147 systemd[1]: Started sshd@10-138.199.175.7:22-139.178.68.195:43484.service - OpenSSH per-connection server daemon (139.178.68.195:43484). Sep 6 00:00:43.130938 sshd[6732]: Accepted publickey for core from 139.178.68.195 port 43484 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:43.133945 sshd[6732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:43.139981 systemd-logind[1457]: New session 9 of user core. Sep 6 00:00:43.146726 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 6 00:00:43.894545 sshd[6732]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:43.900588 systemd[1]: sshd@10-138.199.175.7:22-139.178.68.195:43484.service: Deactivated successfully. Sep 6 00:00:43.903578 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:00:43.906751 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:00:43.908218 systemd-logind[1457]: Removed session 9. Sep 6 00:00:44.854928 update_engine[1460]: I20250906 00:00:44.854373 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:00:44.854928 update_engine[1460]: I20250906 00:00:44.854815 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:00:44.855937 update_engine[1460]: I20250906 00:00:44.855883 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:00:44.856833 update_engine[1460]: E20250906 00:00:44.856788 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:00:44.856911 update_engine[1460]: I20250906 00:00:44.856881 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 6 00:00:44.856911 update_engine[1460]: I20250906 00:00:44.856896 1460 omaha_request_action.cc:617] Omaha request response: Sep 6 00:00:44.857029 update_engine[1460]: E20250906 00:00:44.857005 1460 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 6 00:00:44.857070 update_engine[1460]: I20250906 00:00:44.857046 1460 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 6 00:00:44.857070 update_engine[1460]: I20250906 00:00:44.857059 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 6 00:00:44.857123 update_engine[1460]: I20250906 00:00:44.857067 1460 update_attempter.cc:306] Processing Done. Sep 6 00:00:44.857123 update_engine[1460]: E20250906 00:00:44.857086 1460 update_attempter.cc:619] Update failed. 
Sep 6 00:00:44.857123 update_engine[1460]: I20250906 00:00:44.857095 1460 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 6 00:00:44.857123 update_engine[1460]: I20250906 00:00:44.857103 1460 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 6 00:00:44.857123 update_engine[1460]: I20250906 00:00:44.857112 1460 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 6 00:00:44.857707 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 6 00:00:44.858006 update_engine[1460]: I20250906 00:00:44.857700 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 6 00:00:44.858006 update_engine[1460]: I20250906 00:00:44.857754 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 6 00:00:44.858006 update_engine[1460]: I20250906 00:00:44.857760 1460 omaha_request_action.cc:272] Request: Sep 6 00:00:44.858006 update_engine[1460]: Sep 6 00:00:44.858006 update_engine[1460]: Sep 6 00:00:44.858006 update_engine[1460]: Sep 6 00:00:44.858006 update_engine[1460]: Sep 6 00:00:44.858006 update_engine[1460]: Sep 6 00:00:44.858006 update_engine[1460]: Sep 6 00:00:44.858006 update_engine[1460]: I20250906 00:00:44.857766 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:00:44.858006 update_engine[1460]: I20250906 00:00:44.857949 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:00:44.858263 update_engine[1460]: I20250906 00:00:44.858134 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:00:44.859496 update_engine[1460]: E20250906 00:00:44.859442 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:00:44.859591 update_engine[1460]: I20250906 00:00:44.859511 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 6 00:00:44.859591 update_engine[1460]: I20250906 00:00:44.859524 1460 omaha_request_action.cc:617] Omaha request response: Sep 6 00:00:44.859591 update_engine[1460]: I20250906 00:00:44.859534 1460 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 6 00:00:44.859591 update_engine[1460]: I20250906 00:00:44.859539 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 6 00:00:44.859591 update_engine[1460]: I20250906 00:00:44.859544 1460 update_attempter.cc:306] Processing Done. Sep 6 00:00:44.859591 update_engine[1460]: I20250906 00:00:44.859549 1460 update_attempter.cc:310] Error event sent. Sep 6 00:00:44.859591 update_engine[1460]: I20250906 00:00:44.859575 1460 update_check_scheduler.cc:74] Next update check in 44m32s Sep 6 00:00:44.859936 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 6 00:00:49.075927 systemd[1]: Started sshd@11-138.199.175.7:22-139.178.68.195:43496.service - OpenSSH per-connection server daemon (139.178.68.195:43496). Sep 6 00:00:50.069494 sshd[6764]: Accepted publickey for core from 139.178.68.195 port 43496 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:50.071625 sshd[6764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:50.076898 systemd-logind[1457]: New session 10 of user core. 
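The update_engine exchange above is the periodic Omaha check on a machine with updates switched off: the configured server is literally "disabled", so every fetch fails with "Could not resolve host: disabled" (the blank update_engine[1460]: continuation lines are where the XML request body appeared), libcurl retries a few times roughly 10 s apart, the attempter reports the error event, and a jittered next check is scheduled ("Next update check in 44m32s"). A rough Go sketch of that retry-then-reschedule shape; the counts and intervals are read off this log, not taken from update_engine's real configuration:

```go
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

const (
	maxRetries    = 3                // log shows "No HTTP response, retry 1..3"
	retryInterval = 10 * time.Second // attempts land ~10 s apart in the log
	baseCheck     = 45 * time.Minute // next periodic check ("in 44m32s" after jitter)
)

// checkForUpdate posts one Omaha-style request and retries transport
// failures a bounded number of times before giving up.
func checkForUpdate(url string) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if attempt > 0 {
			time.Sleep(retryInterval)
		}
		var resp *http.Response
		resp, err = http.Post(url, "text/xml", nil)
		if err != nil { // e.g. "Could not resolve host: disabled"
			continue
		}
		resp.Body.Close()
		return nil
	}
	return fmt.Errorf("update check failed after %d retries: %w", maxRetries, err)
}

func main() {
	if err := checkForUpdate("http://disabled/update"); err != nil {
		fmt.Println(err) // mirrors "Omaha request network transfer failed."
	}
	// Jitter the next periodic check so a fleet doesn't stampede the server.
	next := baseCheck + time.Duration(rand.Intn(61)-30)*time.Second
	fmt.Println("scheduling next update check in", next)
}
```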
Sep 6 00:00:50.080718 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 6 00:00:50.832561 sshd[6764]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:50.838560 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:00:50.838826 systemd[1]: sshd@11-138.199.175.7:22-139.178.68.195:43496.service: Deactivated successfully. Sep 6 00:00:50.843654 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:00:50.845098 systemd-logind[1457]: Removed session 10. Sep 6 00:00:51.008769 systemd[1]: Started sshd@12-138.199.175.7:22-139.178.68.195:37138.service - OpenSSH per-connection server daemon (139.178.68.195:37138). Sep 6 00:00:51.355808 systemd[1]: run-containerd-runc-k8s.io-532ee49194b1e24d777a829b44c9dee8a2f93f13b5d7067db2651770047dce71-runc.KwQHKK.mount: Deactivated successfully. Sep 6 00:00:52.016020 sshd[6778]: Accepted publickey for core from 139.178.68.195 port 37138 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:52.020149 sshd[6778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:52.027493 systemd-logind[1457]: New session 11 of user core. Sep 6 00:00:52.034179 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 6 00:00:52.819745 sshd[6778]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:52.827373 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:00:52.829356 systemd[1]: sshd@12-138.199.175.7:22-139.178.68.195:37138.service: Deactivated successfully. Sep 6 00:00:52.833854 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:00:52.837283 systemd-logind[1457]: Removed session 11. Sep 6 00:00:52.999181 systemd[1]: Started sshd@13-138.199.175.7:22-139.178.68.195:37150.service - OpenSSH per-connection server daemon (139.178.68.195:37150). Sep 6 00:00:53.997795 sshd[6808]: Accepted publickey for core from 139.178.68.195 port 37150 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:53.999463 sshd[6808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:54.004179 systemd-logind[1457]: New session 12 of user core. Sep 6 00:00:54.011802 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 6 00:00:54.765263 sshd[6808]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:54.770411 systemd[1]: sshd@13-138.199.175.7:22-139.178.68.195:37150.service: Deactivated successfully. Sep 6 00:00:54.773821 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:00:54.776136 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:00:54.777858 systemd-logind[1457]: Removed session 12. Sep 6 00:00:59.944902 systemd[1]: Started sshd@14-138.199.175.7:22-139.178.68.195:37152.service - OpenSSH per-connection server daemon (139.178.68.195:37152). Sep 6 00:01:00.936598 sshd[6862]: Accepted publickey for core from 139.178.68.195 port 37152 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:01:00.938683 sshd[6862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:01:00.943952 systemd-logind[1457]: New session 13 of user core. Sep 6 00:01:00.950678 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 6 00:01:01.700293 sshd[6862]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:01.704858 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. 
Sep 6 00:01:01.704970 systemd[1]: sshd@14-138.199.175.7:22-139.178.68.195:37152.service: Deactivated successfully.
Sep 6 00:01:01.708243 systemd[1]: session-13.scope: Deactivated successfully.
Sep 6 00:01:01.711233 systemd-logind[1457]: Removed session 13.
Sep 6 00:01:06.880137 systemd[1]: Started sshd@15-138.199.175.7:22-139.178.68.195:56254.service - OpenSSH per-connection server daemon (139.178.68.195:56254).
Sep 6 00:01:07.873772 sshd[6881]: Accepted publickey for core from 139.178.68.195 port 56254 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:01:07.875807 sshd[6881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:01:07.886037 systemd-logind[1457]: New session 14 of user core.
Sep 6 00:01:07.889714 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 6 00:01:08.671019 sshd[6881]: pam_unix(sshd:session): session closed for user core
Sep 6 00:01:08.680380 systemd[1]: sshd@15-138.199.175.7:22-139.178.68.195:56254.service: Deactivated successfully.
Sep 6 00:01:08.682620 systemd[1]: session-14.scope: Deactivated successfully.
Sep 6 00:01:08.683412 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit.
Sep 6 00:01:08.684754 systemd-logind[1457]: Removed session 14.
Sep 6 00:01:13.857178 systemd[1]: Started sshd@16-138.199.175.7:22-139.178.68.195:44830.service - OpenSSH per-connection server daemon (139.178.68.195:44830).
Sep 6 00:01:14.937805 sshd[6914]: Accepted publickey for core from 139.178.68.195 port 44830 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:01:14.940026 sshd[6914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:01:14.946046 systemd-logind[1457]: New session 15 of user core.
Sep 6 00:01:14.953847 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 6 00:01:15.742384 sshd[6914]: pam_unix(sshd:session): session closed for user core
Sep 6 00:01:15.747411 systemd[1]: sshd@16-138.199.175.7:22-139.178.68.195:44830.service: Deactivated successfully.
Sep 6 00:01:15.749710 systemd[1]: session-15.scope: Deactivated successfully.
Sep 6 00:01:15.751071 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit.
Sep 6 00:01:15.752233 systemd-logind[1457]: Removed session 15.
Sep 6 00:01:20.943891 systemd[1]: Started sshd@17-138.199.175.7:22-139.178.68.195:56998.service - OpenSSH per-connection server daemon (139.178.68.195:56998).
Sep 6 00:01:21.364189 systemd[1]: run-containerd-runc-k8s.io-532ee49194b1e24d777a829b44c9dee8a2f93f13b5d7067db2651770047dce71-runc.gQ4vgb.mount: Deactivated successfully.
Sep 6 00:01:21.990691 sshd[6933]: Accepted publickey for core from 139.178.68.195 port 56998 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:01:21.993376 sshd[6933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:01:21.999601 systemd-logind[1457]: New session 16 of user core.
Sep 6 00:01:22.004670 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 6 00:01:22.798006 sshd[6933]: pam_unix(sshd:session): session closed for user core
Sep 6 00:01:22.802479 systemd[1]: sshd@17-138.199.175.7:22-139.178.68.195:56998.service: Deactivated successfully.
Sep 6 00:01:22.805887 systemd[1]: session-16.scope: Deactivated successfully.
Sep 6 00:01:22.808696 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit.
Sep 6 00:01:22.809917 systemd-logind[1457]: Removed session 16.
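The SSH traffic above is a steady open/close cadence: sshd accepts a public key, pam_unix opens the session, systemd-logind registers session N, systemd runs it in session-N.scope, and the whole set is torn down seconds later. A small Python sketch, assuming journal lines in exactly the format shown here, that pairs the pam_unix open/close events per sshd PID and reports session durations:

    import re
    from datetime import datetime

    # Matches e.g. "Sep 6 00:00:50.071625 sshd[6764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)"
    PAT = re.compile(
        r"^(?P<ts>\w+ +\d+ +[\d:.]+) sshd\[(?P<pid>\d+)\]: "
        r"pam_unix\(sshd:session\): session (?P<what>opened|closed) for user (?P<user>\w+)"
    )

    def session_durations(lines, year=2025):
        """Yield (pid, user, seconds) for each opened/closed pair."""
        open_events = {}
        for line in lines:
            m = PAT.match(line)
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
            if m["what"] == "opened":
                open_events[m["pid"]] = (m["user"], ts)
            elif m["pid"] in open_events:
                user, start = open_events.pop(m["pid"])
                yield m["pid"], user, (ts - start).total_seconds()

Fed the lines above, it would report session 10 (sshd PID 6764) as roughly 0.76 s between open and close; the short, regular sessions look more like an automation loop or remote health check than interactive logins.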
Sep 6 00:01:22.976196 systemd[1]: Started sshd@18-138.199.175.7:22-139.178.68.195:57014.service - OpenSSH per-connection server daemon (139.178.68.195:57014).
Sep 6 00:01:24.043239 sshd[6964]: Accepted publickey for core from 139.178.68.195 port 57014 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:01:24.047252 sshd[6964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:01:24.057708 systemd-logind[1457]: New session 17 of user core.
Sep 6 00:01:24.070776 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 6 00:01:25.007273 sshd[6964]: pam_unix(sshd:session): session closed for user core
Sep 6 00:01:25.013723 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit.
Sep 6 00:01:25.013725 systemd[1]: sshd@18-138.199.175.7:22-139.178.68.195:57014.service: Deactivated successfully.
Sep 6 00:01:25.017507 systemd[1]: session-17.scope: Deactivated successfully.
Sep 6 00:01:25.019509 systemd-logind[1457]: Removed session 17.
Sep 6 00:01:25.183871 systemd[1]: Started sshd@19-138.199.175.7:22-139.178.68.195:57028.service - OpenSSH per-connection server daemon (139.178.68.195:57028).
Sep 6 00:01:26.196686 sshd[6975]: Accepted publickey for core from 139.178.68.195 port 57028 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:01:26.199695 sshd[6975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:01:26.205809 systemd-logind[1457]: New session 18 of user core.
Sep 6 00:01:26.209863 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 6 00:01:26.392111 systemd[1]: run-containerd-runc-k8s.io-0a71309c5fa637d0177d62df08e4832899ddcfef1f34c0759910724792bcbe58-runc.S1lEW9.mount: Deactivated successfully.
Sep 6 00:01:27.581628 sshd[6975]: pam_unix(sshd:session): session closed for user core
Sep 6 00:01:27.586783 systemd[1]: sshd@19-138.199.175.7:22-139.178.68.195:57028.service: Deactivated successfully.
Sep 6 00:01:27.588990 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 00:01:27.589927 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
Sep 6 00:01:27.591379 systemd-logind[1457]: Removed session 18.
Sep 6 00:01:27.757743 systemd[1]: Started sshd@20-138.199.175.7:22-139.178.68.195:57042.service - OpenSSH per-connection server daemon (139.178.68.195:57042).
Sep 6 00:01:28.783577 sshd[7012]: Accepted publickey for core from 139.178.68.195 port 57042 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:01:28.786560 sshd[7012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:01:28.793406 systemd-logind[1457]: New session 19 of user core.
Sep 6 00:01:28.799717 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 6 00:01:29.693078 sshd[7012]: pam_unix(sshd:session): session closed for user core
Sep 6 00:01:29.701817 systemd[1]: sshd@20-138.199.175.7:22-139.178.68.195:57042.service: Deactivated successfully.
Sep 6 00:01:29.706004 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 00:01:29.707153 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit.
Sep 6 00:01:29.709867 systemd-logind[1457]: Removed session 19.
Sep 6 00:01:29.885359 systemd[1]: Started sshd@21-138.199.175.7:22-139.178.68.195:57044.service - OpenSSH per-connection server daemon (139.178.68.195:57044).
Sep 6 00:01:30.952524 sshd[7023]: Accepted publickey for core from 139.178.68.195 port 57044 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:01:30.955478 sshd[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:01:30.962521 systemd-logind[1457]: New session 20 of user core.
Sep 6 00:01:30.973781 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 6 00:01:31.757491 sshd[7023]: pam_unix(sshd:session): session closed for user core
Sep 6 00:01:31.762925 systemd[1]: sshd@21-138.199.175.7:22-139.178.68.195:57044.service: Deactivated successfully.
Sep 6 00:01:31.766850 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 00:01:31.768220 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit.
Sep 6 00:01:31.769606 systemd-logind[1457]: Removed session 20.
Sep 6 00:01:36.944069 systemd[1]: Started sshd@22-138.199.175.7:22-139.178.68.195:45780.service - OpenSSH per-connection server daemon (139.178.68.195:45780).
Sep 6 00:01:37.996366 sshd[7040]: Accepted publickey for core from 139.178.68.195 port 45780 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:01:37.999553 sshd[7040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:01:38.018564 systemd-logind[1457]: New session 21 of user core.
Sep 6 00:01:38.020680 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 6 00:01:38.812826 sshd[7040]: pam_unix(sshd:session): session closed for user core
Sep 6 00:01:38.818897 systemd[1]: sshd@22-138.199.175.7:22-139.178.68.195:45780.service: Deactivated successfully.
Sep 6 00:01:38.825323 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 00:01:38.827716 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit.
Sep 6 00:01:38.828714 systemd-logind[1457]: Removed session 21.
Sep 6 00:01:41.218474 systemd[1]: run-containerd-runc-k8s.io-5d47bc6c50b3798a0fdbc119fe0745cc0a199c3f9d94e5636cb790c39cdaedac-runc.0Rs4an.mount: Deactivated successfully.
Sep 6 00:01:44.001790 systemd[1]: Started sshd@23-138.199.175.7:22-139.178.68.195:57098.service - OpenSSH per-connection server daemon (139.178.68.195:57098).
Sep 6 00:01:45.053067 sshd[7079]: Accepted publickey for core from 139.178.68.195 port 57098 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:01:45.055543 sshd[7079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:01:45.063135 systemd-logind[1457]: New session 22 of user core.
Sep 6 00:01:45.067699 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 6 00:01:45.858820 sshd[7079]: pam_unix(sshd:session): session closed for user core
Sep 6 00:01:45.863620 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit.
Sep 6 00:01:45.864170 systemd[1]: sshd@23-138.199.175.7:22-139.178.68.195:57098.service: Deactivated successfully.
Sep 6 00:01:45.868072 systemd[1]: session-22.scope: Deactivated successfully.
Sep 6 00:01:45.872415 systemd-logind[1457]: Removed session 22.
Sep 6 00:01:51.356602 systemd[1]: run-containerd-runc-k8s.io-532ee49194b1e24d777a829b44c9dee8a2f93f13b5d7067db2651770047dce71-runc.SjJ6vH.mount: Deactivated successfully.
Sep 6 00:02:00.329645 systemd[1]: cri-containerd-a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af.scope: Deactivated successfully.
Sep 6 00:02:00.331585 systemd[1]: cri-containerd-a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af.scope: Consumed 5.204s CPU time, 16.8M memory peak, 0B memory swap peak.
Sep 6 00:02:00.359146 containerd[1480]: time="2025-09-06T00:02:00.358774076Z" level=info msg="shim disconnected" id=a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af namespace=k8s.io
Sep 6 00:02:00.359146 containerd[1480]: time="2025-09-06T00:02:00.358918196Z" level=warning msg="cleaning up after shim disconnected" id=a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af namespace=k8s.io
Sep 6 00:02:00.359146 containerd[1480]: time="2025-09-06T00:02:00.358927276Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:02:00.364318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af-rootfs.mount: Deactivated successfully.
Sep 6 00:02:00.402353 kubelet[2614]: E0906 00:02:00.400645 2614 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:47844->10.0.0.2:2379: read: connection timed out"
Sep 6 00:02:00.409605 systemd[1]: cri-containerd-c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3.scope: Deactivated successfully.
Sep 6 00:02:00.409883 systemd[1]: cri-containerd-c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3.scope: Consumed 4.122s CPU time, 16.2M memory peak, 0B memory swap peak.
Sep 6 00:02:00.434145 containerd[1480]: time="2025-09-06T00:02:00.434076116Z" level=info msg="shim disconnected" id=c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3 namespace=k8s.io
Sep 6 00:02:00.434145 containerd[1480]: time="2025-09-06T00:02:00.434134756Z" level=warning msg="cleaning up after shim disconnected" id=c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3 namespace=k8s.io
Sep 6 00:02:00.434145 containerd[1480]: time="2025-09-06T00:02:00.434145236Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:02:00.436641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3-rootfs.mount: Deactivated successfully.
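Each time systemd stops one of the cri-containerd-*.scope units it logs the cgroup's consumed CPU time and peak memory, which makes it easy to rank the failing containers by cost. A short Python sketch, assuming the exact "Consumed ... CPU time, ... memory peak" wording above (systemd can also print minute-granularity values, which this deliberately ignores):

    import re

    # Matches e.g. "... cri-containerd-<64-hex-id>.scope: Consumed 5.204s CPU time, 16.8M memory peak, ..."
    SCOPE = re.compile(
        r"cri-containerd-(?P<cid>[0-9a-f]{64})\.scope: "
        r"Consumed (?P<cpu>[\d.]+)s CPU time(?:, (?P<mem>[\d.]+[KMGT]?) memory peak)?"
    )

    def heaviest_scopes(lines):
        """Return (container_id, cpu_seconds, memory_peak) sorted by CPU time, highest first."""
        hits = [(m["cid"], float(m["cpu"]), m["mem"]) for m in map(SCOPE.search, lines) if m]
        return sorted(hits, key=lambda h: h[1], reverse=True)

Against the two scopes above it ranks a58d30f3... (5.204 s CPU, 16.8M peak) ahead of c1cbaac6... (4.122 s CPU, 16.2M peak); the 23.637 s scope that follows dwarfs both.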
Sep 6 00:02:00.992103 kubelet[2614]: I0906 00:02:00.990627 2614 scope.go:117] "RemoveContainer" containerID="a58d30f39771d8454118993e71329f699ea49bb2be359990f13da06db4d8b6af"
Sep 6 00:02:00.995148 kubelet[2614]: I0906 00:02:00.995103 2614 scope.go:117] "RemoveContainer" containerID="c1cbaac61716451a10acf31c5bcfa5c048a4e8bf5203706f9654cddc4c767cd3"
Sep 6 00:02:00.996928 containerd[1480]: time="2025-09-06T00:02:00.996801825Z" level=info msg="CreateContainer within sandbox \"f0c2f5c8ea470c17cfc0bb4659e8e3c2259cde69f3d7594b0dfb32296cb8093f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 6 00:02:00.997312 containerd[1480]: time="2025-09-06T00:02:00.997174907Z" level=info msg="CreateContainer within sandbox \"abf49184476e54f5ee3c04384e46cf7e03d3aa0780b65b566737343cc783c62c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 6 00:02:01.015886 containerd[1480]: time="2025-09-06T00:02:01.015841185Z" level=info msg="CreateContainer within sandbox \"f0c2f5c8ea470c17cfc0bb4659e8e3c2259cde69f3d7594b0dfb32296cb8093f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"899b49fa62e9e1070e9ab95ea8f4ffc443f6bf57864589436a0f5f1c8d016283\""
Sep 6 00:02:01.016613 containerd[1480]: time="2025-09-06T00:02:01.016589908Z" level=info msg="StartContainer for \"899b49fa62e9e1070e9ab95ea8f4ffc443f6bf57864589436a0f5f1c8d016283\""
Sep 6 00:02:01.020579 containerd[1480]: time="2025-09-06T00:02:01.020534605Z" level=info msg="CreateContainer within sandbox \"abf49184476e54f5ee3c04384e46cf7e03d3aa0780b65b566737343cc783c62c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a2a762bb730dda2ca1b21d41f9a4e8332d1fd66c234664d01388b6822b843ed8\""
Sep 6 00:02:01.021307 containerd[1480]: time="2025-09-06T00:02:01.021189408Z" level=info msg="StartContainer for \"a2a762bb730dda2ca1b21d41f9a4e8332d1fd66c234664d01388b6822b843ed8\""
Sep 6 00:02:01.056680 systemd[1]: Started cri-containerd-899b49fa62e9e1070e9ab95ea8f4ffc443f6bf57864589436a0f5f1c8d016283.scope - libcontainer container 899b49fa62e9e1070e9ab95ea8f4ffc443f6bf57864589436a0f5f1c8d016283.
Sep 6 00:02:01.064641 systemd[1]: Started cri-containerd-a2a762bb730dda2ca1b21d41f9a4e8332d1fd66c234664d01388b6822b843ed8.scope - libcontainer container a2a762bb730dda2ca1b21d41f9a4e8332d1fd66c234664d01388b6822b843ed8.
Sep 6 00:02:01.108471 containerd[1480]: time="2025-09-06T00:02:01.107316889Z" level=info msg="StartContainer for \"899b49fa62e9e1070e9ab95ea8f4ffc443f6bf57864589436a0f5f1c8d016283\" returns successfully"
Sep 6 00:02:01.119702 containerd[1480]: time="2025-09-06T00:02:01.119525340Z" level=info msg="StartContainer for \"a2a762bb730dda2ca1b21d41f9a4e8332d1fd66c234664d01388b6822b843ed8\" returns successfully"
Sep 6 00:02:01.816520 systemd[1]: cri-containerd-9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302.scope: Deactivated successfully.
Sep 6 00:02:01.816807 systemd[1]: cri-containerd-9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302.scope: Consumed 23.637s CPU time.
Sep 6 00:02:01.854576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302-rootfs.mount: Deactivated successfully.
Sep 6 00:02:01.855033 containerd[1480]: time="2025-09-06T00:02:01.854972868Z" level=info msg="shim disconnected" id=9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302 namespace=k8s.io
Sep 6 00:02:01.855033 containerd[1480]: time="2025-09-06T00:02:01.855029868Z" level=warning msg="cleaning up after shim disconnected" id=9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302 namespace=k8s.io
Sep 6 00:02:01.855416 containerd[1480]: time="2025-09-06T00:02:01.855038388Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:02:02.007518 kubelet[2614]: I0906 00:02:02.006989 2614 scope.go:117] "RemoveContainer" containerID="9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302"
Sep 6 00:02:02.010657 containerd[1480]: time="2025-09-06T00:02:02.010132239Z" level=info msg="CreateContainer within sandbox \"066e40582f2800203f6c9de101445a8e8d37334bda940458b4a9ae164b1c90aa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 6 00:02:02.031384 containerd[1480]: time="2025-09-06T00:02:02.031335327Z" level=info msg="CreateContainer within sandbox \"066e40582f2800203f6c9de101445a8e8d37334bda940458b4a9ae164b1c90aa\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c\""
Sep 6 00:02:02.032002 containerd[1480]: time="2025-09-06T00:02:02.031976770Z" level=info msg="StartContainer for \"c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c\""
Sep 6 00:02:02.071649 systemd[1]: Started cri-containerd-c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c.scope - libcontainer container c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c.
Sep 6 00:02:02.284544 containerd[1480]: time="2025-09-06T00:02:02.284396138Z" level=info msg="StartContainer for \"c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c\" returns successfully"
Sep 6 00:02:05.270633 kubelet[2614]: E0906 00:02:05.268739 2614 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:47664->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-4ef3874a70.1862888641578aff kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-4ef3874a70,UID:e541c017e1943efa824c8b3337db0933,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-4ef3874a70,},FirstTimestamp:2025-09-06 00:01:54.831166207 +0000 UTC m=+237.070318756,LastTimestamp:2025-09-06 00:01:54.831166207 +0000 UTC m=+237.070318756,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-4ef3874a70,}"
Sep 6 00:02:10.402749 kubelet[2614]: E0906 00:02:10.402648 2614 controller.go:195] "Failed to update lease" err="Put \"https://138.199.175.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-4ef3874a70?timeout=10s\": context deadline exceeded"
Sep 6 00:02:11.470944 kubelet[2614]: I0906 00:02:11.470867 2614 status_manager.go:895] "Failed to get status for pod" podUID="54e3e6c9d7915cef527fc424824822a9" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-4ef3874a70" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:47774->10.0.0.2:2379: read: connection timed out"
Sep 6 00:02:13.740073 systemd[1]: cri-containerd-c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c.scope: Deactivated successfully.
Sep 6 00:02:13.767902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c-rootfs.mount: Deactivated successfully.
Sep 6 00:02:13.774485 containerd[1480]: time="2025-09-06T00:02:13.774361911Z" level=info msg="shim disconnected" id=c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c namespace=k8s.io
Sep 6 00:02:13.774982 containerd[1480]: time="2025-09-06T00:02:13.774503711Z" level=warning msg="cleaning up after shim disconnected" id=c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c namespace=k8s.io
Sep 6 00:02:13.774982 containerd[1480]: time="2025-09-06T00:02:13.774520512Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:02:14.046556 kubelet[2614]: I0906 00:02:14.045640 2614 scope.go:117] "RemoveContainer" containerID="9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302"
Sep 6 00:02:14.046556 kubelet[2614]: I0906 00:02:14.045979 2614 scope.go:117] "RemoveContainer" containerID="c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c"
Sep 6 00:02:14.046556 kubelet[2614]: E0906 00:02:14.046155 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-495rq_tigera-operator(4fcac0f3-c46b-4f92-8ccf-7c7226bbb022)\"" pod="tigera-operator/tigera-operator-755d956888-495rq" podUID="4fcac0f3-c46b-4f92-8ccf-7c7226bbb022"
Sep 6 00:02:14.048026 containerd[1480]: time="2025-09-06T00:02:14.047993236Z" level=info msg="RemoveContainer for \"9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302\""
Sep 6 00:02:14.052544 containerd[1480]: time="2025-09-06T00:02:14.052499532Z" level=info msg="RemoveContainer for \"9a470646f05b9fc0f9fa9653629966ba178732421e96bc5df79e837bb6c30302\" returns successfully"
Sep 6 00:02:20.403305 kubelet[2614]: E0906 00:02:20.403196 2614 controller.go:195] "Failed to update lease" err="Put \"https://138.199.175.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-4ef3874a70?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 6 00:02:27.905698 kubelet[2614]: I0906 00:02:27.905199 2614 scope.go:117] "RemoveContainer" containerID="c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c"
Sep 6 00:02:27.908822 containerd[1480]: time="2025-09-06T00:02:27.908742155Z" level=info msg="CreateContainer within sandbox \"066e40582f2800203f6c9de101445a8e8d37334bda940458b4a9ae164b1c90aa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}"
Sep 6 00:02:27.924557 containerd[1480]: time="2025-09-06T00:02:27.924506285Z" level=info msg="CreateContainer within sandbox \"066e40582f2800203f6c9de101445a8e8d37334bda940458b4a9ae164b1c90aa\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296\""
Sep 6 00:02:27.925957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1125735179.mount: Deactivated successfully.
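The pod_workers errors are kubelet's CrashLoopBackOff in action: each restart of the tigera-operator container doubles the back-off, which is why the log shows "back-off 10s" here and "back-off 20s" after the next crash. A minimal sketch of the schedule (the 10 s initial delay and 300 s cap are stock kubelet defaults; treat them as assumptions for tuned clusters):

    def crashloop_backoffs(restarts, base=10.0, cap=300.0):
        """CrashLoopBackOff delays: the base doubles per restart, clamped at cap."""
        delay = base
        for _ in range(restarts):
            yield delay
            delay = min(delay * 2, cap)

    # list(crashloop_backoffs(6)) -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]

The back-off resets only after the container runs cleanly for a while, so an operator that keeps crashing, as here, walks this ladder up toward the cap.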
Sep 6 00:02:27.926347 containerd[1480]: time="2025-09-06T00:02:27.926113130Z" level=info msg="StartContainer for \"fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296\""
Sep 6 00:02:27.965782 systemd[1]: Started cri-containerd-fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296.scope - libcontainer container fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296.
Sep 6 00:02:28.000546 containerd[1480]: time="2025-09-06T00:02:27.998113119Z" level=info msg="StartContainer for \"fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296\" returns successfully"
Sep 6 00:02:30.405572 kubelet[2614]: E0906 00:02:30.404495 2614 controller.go:195] "Failed to update lease" err="Put \"https://138.199.175.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-4ef3874a70?timeout=10s\": context deadline exceeded"
Sep 6 00:02:39.220568 systemd[1]: cri-containerd-fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296.scope: Deactivated successfully.
Sep 6 00:02:39.242737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296-rootfs.mount: Deactivated successfully.
Sep 6 00:02:39.247508 containerd[1480]: time="2025-09-06T00:02:39.247445969Z" level=info msg="shim disconnected" id=fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296 namespace=k8s.io
Sep 6 00:02:39.247508 containerd[1480]: time="2025-09-06T00:02:39.247500130Z" level=warning msg="cleaning up after shim disconnected" id=fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296 namespace=k8s.io
Sep 6 00:02:39.247508 containerd[1480]: time="2025-09-06T00:02:39.247509610Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:02:39.273571 kubelet[2614]: E0906 00:02:39.273082 2614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-4ef3874a70.1862888641578aff kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-4ef3874a70,UID:e541c017e1943efa824c8b3337db0933,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-4ef3874a70,},FirstTimestamp:2025-09-06 00:01:54.831166207 +0000 UTC m=+237.070318756,LastTimestamp:2025-09-06 00:01:58.843278377 +0000 UTC m=+241.082430806,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-4ef3874a70,}"
Sep 6 00:02:40.123118 kubelet[2614]: I0906 00:02:40.123033 2614 scope.go:117] "RemoveContainer" containerID="c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c"
Sep 6 00:02:40.123578 kubelet[2614]: I0906 00:02:40.123465 2614 scope.go:117] "RemoveContainer" containerID="fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296"
Sep 6 00:02:40.124484 kubelet[2614]: E0906 00:02:40.123947 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-755d956888-495rq_tigera-operator(4fcac0f3-c46b-4f92-8ccf-7c7226bbb022)\"" pod="tigera-operator/tigera-operator-755d956888-495rq" podUID="4fcac0f3-c46b-4f92-8ccf-7c7226bbb022"
Sep 6 00:02:40.126133 containerd[1480]: time="2025-09-06T00:02:40.125744019Z" level=info msg="RemoveContainer for \"c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c\""
Sep 6 00:02:40.130475 containerd[1480]: time="2025-09-06T00:02:40.130322192Z" level=info msg="RemoveContainer for \"c3f254157137ddbb130e8489cc5f2c900dfcb8243249c844e2e109c6c5ebcc1c\" returns successfully"
Sep 6 00:02:40.405866 kubelet[2614]: E0906 00:02:40.405473 2614 controller.go:195] "Failed to update lease" err="Put \"https://138.199.175.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-4ef3874a70?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 6 00:02:40.405866 kubelet[2614]: I0906 00:02:40.405538 2614 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Sep 6 00:02:50.406105 kubelet[2614]: E0906 00:02:50.405815 2614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.175.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-4ef3874a70?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Sep 6 00:02:52.905479 kubelet[2614]: I0906 00:02:52.905141 2614 scope.go:117] "RemoveContainer" containerID="fabcd2e8b494976d4fcaa71c3dbff160ae995acef8951b65fff5c9a0bed2b296"
Sep 6 00:02:52.905479 kubelet[2614]: E0906 00:02:52.905466 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-755d956888-495rq_tigera-operator(4fcac0f3-c46b-4f92-8ccf-7c7226bbb022)\"" pod="tigera-operator/tigera-operator-755d956888-495rq" podUID="4fcac0f3-c46b-4f92-8ccf-7c7226bbb022"