Sep 5 23:50:09.915423 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 23:50:09.915456 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 5 23:50:09.915468 kernel: KASLR enabled
Sep 5 23:50:09.915474 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 5 23:50:09.915481 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Sep 5 23:50:09.915486 kernel: random: crng init done
Sep 5 23:50:09.915493 kernel: ACPI: Early table checksum verification disabled
Sep 5 23:50:09.915499 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 5 23:50:09.915505 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 5 23:50:09.915513 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.915520 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.915525 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.915532 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.915563 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.915571 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.915580 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.915587 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.915593 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.915600 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 5 23:50:09.915606 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 5 23:50:09.915612 kernel: NUMA: Failed to initialise from firmware
Sep 5 23:50:09.915619 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 5 23:50:09.915625 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Sep 5 23:50:09.915632 kernel: Zone ranges:
Sep 5 23:50:09.915638 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 5 23:50:09.915684 kernel: DMA32 empty
Sep 5 23:50:09.915691 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 5 23:50:09.915698 kernel: Movable zone start for each node
Sep 5 23:50:09.915704 kernel: Early memory node ranges
Sep 5 23:50:09.915710 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Sep 5 23:50:09.915717 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 5 23:50:09.915723 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 5 23:50:09.915730 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 5 23:50:09.915736 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 5 23:50:09.915743 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 5 23:50:09.915749 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 5 23:50:09.915756 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 5 23:50:09.915765 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 5 23:50:09.915771 kernel: psci: probing for conduit method from ACPI.
Sep 5 23:50:09.915777 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 23:50:09.915787 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 23:50:09.915793 kernel: psci: Trusted OS migration not required
Sep 5 23:50:09.915800 kernel: psci: SMC Calling Convention v1.1
Sep 5 23:50:09.915808 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 5 23:50:09.915816 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 23:50:09.915822 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 23:50:09.915829 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 5 23:50:09.915836 kernel: Detected PIPT I-cache on CPU0
Sep 5 23:50:09.915843 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 23:50:09.915849 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 23:50:09.915856 kernel: CPU features: detected: Spectre-v4
Sep 5 23:50:09.915863 kernel: CPU features: detected: Spectre-BHB
Sep 5 23:50:09.915870 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 23:50:09.915878 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 23:50:09.915885 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 23:50:09.915892 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 23:50:09.915898 kernel: alternatives: applying boot alternatives
Sep 5 23:50:09.915907 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:50:09.915914 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 23:50:09.915921 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 23:50:09.915928 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 23:50:09.915934 kernel: Fallback order for Node 0: 0
Sep 5 23:50:09.915941 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Sep 5 23:50:09.915948 kernel: Policy zone: Normal
Sep 5 23:50:09.915957 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 23:50:09.915963 kernel: software IO TLB: area num 2.
Sep 5 23:50:09.915970 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Sep 5 23:50:09.915977 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved)
Sep 5 23:50:09.915984 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 5 23:50:09.915991 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 23:50:09.915999 kernel: rcu: RCU event tracing is enabled.
Sep 5 23:50:09.916006 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 5 23:50:09.916013 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 23:50:09.916020 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 23:50:09.916027 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 23:50:09.916035 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 5 23:50:09.916042 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 23:50:09.916048 kernel: GICv3: 256 SPIs implemented
Sep 5 23:50:09.916055 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 23:50:09.916062 kernel: Root IRQ handler: gic_handle_irq
Sep 5 23:50:09.916069 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 5 23:50:09.916075 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 5 23:50:09.916082 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 5 23:50:09.916089 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 23:50:09.916097 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 5 23:50:09.916104 kernel: GICv3: using LPI property table @0x00000001000e0000
Sep 5 23:50:09.916110 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Sep 5 23:50:09.916119 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 23:50:09.916126 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:50:09.916133 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 23:50:09.916140 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 23:50:09.916147 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 23:50:09.916154 kernel: Console: colour dummy device 80x25
Sep 5 23:50:09.916161 kernel: ACPI: Core revision 20230628
Sep 5 23:50:09.916169 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 23:50:09.916176 kernel: pid_max: default: 32768 minimum: 301
Sep 5 23:50:09.916183 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 23:50:09.916192 kernel: landlock: Up and running.
Sep 5 23:50:09.916198 kernel: SELinux: Initializing.
Sep 5 23:50:09.916205 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:50:09.916213 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:50:09.916221 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:50:09.916228 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:50:09.916235 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 23:50:09.916242 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 23:50:09.916249 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 5 23:50:09.916258 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 5 23:50:09.916265 kernel: Remapping and enabling EFI services.
Sep 5 23:50:09.916273 kernel: smp: Bringing up secondary CPUs ...
Sep 5 23:50:09.916280 kernel: Detected PIPT I-cache on CPU1
Sep 5 23:50:09.916287 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 5 23:50:09.916294 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Sep 5 23:50:09.916302 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:50:09.916308 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 23:50:09.916315 kernel: smp: Brought up 1 node, 2 CPUs
Sep 5 23:50:09.916322 kernel: SMP: Total of 2 processors activated.
Sep 5 23:50:09.916331 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 23:50:09.916339 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 23:50:09.916354 kernel: CPU features: detected: Common not Private translations
Sep 5 23:50:09.916363 kernel: CPU features: detected: CRC32 instructions
Sep 5 23:50:09.916370 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 5 23:50:09.916378 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 23:50:09.916385 kernel: CPU features: detected: LSE atomic instructions
Sep 5 23:50:09.916393 kernel: CPU features: detected: Privileged Access Never
Sep 5 23:50:09.916400 kernel: CPU features: detected: RAS Extension Support
Sep 5 23:50:09.916410 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 5 23:50:09.916418 kernel: CPU: All CPU(s) started at EL1
Sep 5 23:50:09.916425 kernel: alternatives: applying system-wide alternatives
Sep 5 23:50:09.916432 kernel: devtmpfs: initialized
Sep 5 23:50:09.916440 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 23:50:09.916451 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 5 23:50:09.916459 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 23:50:09.916468 kernel: SMBIOS 3.0.0 present.
Sep 5 23:50:09.916475 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Sep 5 23:50:09.916483 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 23:50:09.916490 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 23:50:09.916498 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 23:50:09.916506 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 23:50:09.916513 kernel: audit: initializing netlink subsys (disabled)
Sep 5 23:50:09.916521 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Sep 5 23:50:09.916529 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 23:50:09.917024 kernel: cpuidle: using governor menu
Sep 5 23:50:09.917038 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 23:50:09.917045 kernel: ASID allocator initialised with 32768 entries
Sep 5 23:50:09.917053 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 23:50:09.917060 kernel: Serial: AMBA PL011 UART driver
Sep 5 23:50:09.917068 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 5 23:50:09.917076 kernel: Modules: 0 pages in range for non-PLT usage
Sep 5 23:50:09.917083 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 23:50:09.917091 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 23:50:09.917102 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 23:50:09.917110 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 23:50:09.917118 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 23:50:09.917126 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 23:50:09.917133 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 23:50:09.917141 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 23:50:09.917148 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 23:50:09.917155 kernel: ACPI: Added _OSI(Module Device)
Sep 5 23:50:09.917163 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 23:50:09.917173 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 23:50:09.917180 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 23:50:09.917188 kernel: ACPI: Interpreter enabled
Sep 5 23:50:09.917196 kernel: ACPI: Using GIC for interrupt routing
Sep 5 23:50:09.917203 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 23:50:09.917211 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 23:50:09.917218 kernel: printk: console [ttyAMA0] enabled
Sep 5 23:50:09.917226 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 23:50:09.917413 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 23:50:09.917499 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 23:50:09.917713 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 23:50:09.917785 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 5 23:50:09.917881 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 5 23:50:09.917894 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 5 23:50:09.917902 kernel: PCI host bridge to bus 0000:00
Sep 5 23:50:09.917985 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 5 23:50:09.918058 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 23:50:09.918116 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 5 23:50:09.918173 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 23:50:09.918262 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 5 23:50:09.918349 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Sep 5 23:50:09.918482 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Sep 5 23:50:09.918612 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 5 23:50:09.918752 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:09.918826 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Sep 5 23:50:09.918902 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:09.918972 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Sep 5 23:50:09.919046 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:09.919113 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Sep 5 23:50:09.919194 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:09.919259 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Sep 5 23:50:09.919334 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:09.919400 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Sep 5 23:50:09.919473 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:09.919575 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Sep 5 23:50:09.919665 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:09.919731 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Sep 5 23:50:09.919812 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:09.919878 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Sep 5 23:50:09.919952 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:09.920018 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Sep 5 23:50:09.920102 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Sep 5 23:50:09.920169 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Sep 5 23:50:09.920249 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 5 23:50:09.920317 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Sep 5 23:50:09.920385 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 23:50:09.920452 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 5 23:50:09.920532 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 5 23:50:09.920620 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Sep 5 23:50:09.920744 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 5 23:50:09.920822 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Sep 5 23:50:09.920893 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Sep 5 23:50:09.920974 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 5 23:50:09.921042 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Sep 5 23:50:09.921127 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 5 23:50:09.921195 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Sep 5 23:50:09.921272 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 5 23:50:09.921340 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Sep 5 23:50:09.921408 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 5 23:50:09.921497 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 5 23:50:09.921616 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Sep 5 23:50:09.921708 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Sep 5 23:50:09.921778 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 5 23:50:09.921851 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Sep 5 23:50:09.921917 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Sep 5 23:50:09.921983 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Sep 5 23:50:09.922059 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Sep 5 23:50:09.922126 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Sep 5 23:50:09.922190 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Sep 5 23:50:09.922258 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 5 23:50:09.922324 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Sep 5 23:50:09.922390 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Sep 5 23:50:09.922460 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 5 23:50:09.922525 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Sep 5 23:50:09.923345 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Sep 5 23:50:09.923423 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 5 23:50:09.923488 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Sep 5 23:50:09.924117 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Sep 5 23:50:09.924225 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 5 23:50:09.924294 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Sep 5 23:50:09.924362 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Sep 5 23:50:09.924456 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 5 23:50:09.924531 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Sep 5 23:50:09.925069 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Sep 5 23:50:09.925146 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 5 23:50:09.925214 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Sep 5 23:50:09.925284 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Sep 5 23:50:09.925356 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 5 23:50:09.925421 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Sep 5 23:50:09.925493 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Sep 5 23:50:09.925592 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Sep 5 23:50:09.925683 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 5 23:50:09.925760 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Sep 5 23:50:09.925833 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 5 23:50:09.925906 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Sep 5 23:50:09.925972 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 5 23:50:09.926048 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Sep 5 23:50:09.926115 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 5 23:50:09.926184 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Sep 5 23:50:09.926250 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 5 23:50:09.926320 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Sep 5 23:50:09.926385 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 5 23:50:09.926455 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Sep 5 23:50:09.926521 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 5 23:50:09.926606 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Sep 5 23:50:09.926721 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 5 23:50:09.926794 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Sep 5 23:50:09.926860 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 5 23:50:09.926932 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Sep 5 23:50:09.927005 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Sep 5 23:50:09.927071 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Sep 5 23:50:09.927136 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 5 23:50:09.927200 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Sep 5 23:50:09.927267 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 5 23:50:09.927333 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Sep 5 23:50:09.927398 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 5 23:50:09.927469 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Sep 5 23:50:09.927570 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 5 23:50:09.927657 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Sep 5 23:50:09.927731 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 5 23:50:09.927800 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Sep 5 23:50:09.927866 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 5 23:50:09.927934 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Sep 5 23:50:09.928000 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 5 23:50:09.928069 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Sep 5 23:50:09.928141 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 5 23:50:09.928207 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Sep 5 23:50:09.928275 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Sep 5 23:50:09.928357 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Sep 5 23:50:09.928437 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Sep 5 23:50:09.928505 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 23:50:09.928590 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Sep 5 23:50:09.928676 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 5 23:50:09.928752 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 5 23:50:09.928817 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Sep 5 23:50:09.928884 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 5 23:50:09.928959 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Sep 5 23:50:09.929046 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 5 23:50:09.929133 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 5 23:50:09.929212 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Sep 5 23:50:09.929277 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 5 23:50:09.931769 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 5 23:50:09.931890 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Sep 5 23:50:09.931966 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 5 23:50:09.932034 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 5 23:50:09.932126 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Sep 5 23:50:09.932206 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 5 23:50:09.932296 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 5 23:50:09.932371 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 5 23:50:09.932440 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 5 23:50:09.932506 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Sep 5 23:50:09.932589 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 5 23:50:09.932684 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Sep 5 23:50:09.932766 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 5 23:50:09.932832 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 5 23:50:09.932898 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 5 23:50:09.932962 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 5 23:50:09.933040 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Sep 5 23:50:09.933108 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Sep 5 23:50:09.933178 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 5 23:50:09.933244 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 5 23:50:09.933314 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Sep 5 23:50:09.933380 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 5 23:50:09.933458 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Sep 5 23:50:09.933528 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Sep 5 23:50:09.934991 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Sep 5 23:50:09.935073 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 5 23:50:09.935139 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 5 23:50:09.935200 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Sep 5 23:50:09.935273 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 5 23:50:09.935343 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 5 23:50:09.935407 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 5 23:50:09.935470 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Sep 5 23:50:09.936553 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 5 23:50:09.936752 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 5 23:50:09.936823 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Sep 5 23:50:09.936886 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Sep 5 23:50:09.936962 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 5 23:50:09.937039 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 5 23:50:09.937103 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 23:50:09.937160 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 5 23:50:09.937246 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 5 23:50:09.937313 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Sep 5 23:50:09.937377 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 5 23:50:09.937462 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Sep 5 23:50:09.937526 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Sep 5 23:50:09.938980 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 5 23:50:09.939077 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Sep 5 23:50:09.939139 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Sep 5 23:50:09.939198 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 5 23:50:09.939280 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 5 23:50:09.939341 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Sep 5 23:50:09.939418 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 5 23:50:09.939511 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Sep 5 23:50:09.940740 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Sep 5 23:50:09.940830 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 5 23:50:09.940910 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Sep 5 23:50:09.940980 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Sep 5 23:50:09.941038 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 5 23:50:09.941111 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Sep 5 23:50:09.941171 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Sep 5 23:50:09.941236 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 5 23:50:09.941316 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Sep 5 23:50:09.941385 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Sep 5 23:50:09.941455 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 5 23:50:09.941523 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Sep 5 23:50:09.942719 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Sep 5 23:50:09.942803 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 5 23:50:09.942824 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 23:50:09.942832 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 23:50:09.942844 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 23:50:09.942852 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 23:50:09.942860 kernel: iommu: Default domain type: Translated
Sep 5 23:50:09.942868 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 23:50:09.942876 kernel: efivars: Registered efivars operations
Sep 5 23:50:09.942884 kernel: vgaarb: loaded
Sep 5 23:50:09.942892 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 23:50:09.942902 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 23:50:09.942911 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 23:50:09.942919 kernel: pnp: PnP ACPI init
Sep 5 23:50:09.943011 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 5 23:50:09.943023 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 23:50:09.943031 kernel: NET: Registered PF_INET protocol family
Sep 5 23:50:09.943039 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 23:50:09.943047 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 23:50:09.943059 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 23:50:09.943067 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 23:50:09.943075 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 23:50:09.943085 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 23:50:09.943093 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:50:09.943101 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:50:09.943109 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 23:50:09.943198 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Sep 5 23:50:09.943211 kernel: PCI: CLS 0 bytes, default 64
Sep 5 23:50:09.943222 kernel: kvm [1]: HYP mode not available
Sep 5 23:50:09.943230 kernel: Initialise system trusted keyrings
Sep 5 23:50:09.943238 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 23:50:09.943245 kernel: Key type asymmetric registered
Sep 5 23:50:09.943253 kernel: Asymmetric key parser 'x509' registered
Sep 5 23:50:09.943261 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 23:50:09.943269 kernel: io scheduler mq-deadline registered
Sep 5 23:50:09.943277 kernel: io scheduler kyber registered
Sep 5 23:50:09.943284 kernel: io scheduler bfq registered
Sep 5 23:50:09.943295 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 5 23:50:09.943372 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Sep 5 23:50:09.943446 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Sep 5 23:50:09.943523 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:09.944755 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Sep 5 23:50:09.944837 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Sep 5 23:50:09.944902 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:09.944982 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Sep 5 23:50:09.945048 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Sep 5 23:50:09.945115 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:09.945186 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Sep 5 23:50:09.945253 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Sep 5 23:50:09.945320 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:09.945390 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Sep 5 23:50:09.945457 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Sep 5 23:50:09.945523 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:09.946630 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Sep 5 23:50:09.946773 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Sep 5 23:50:09.946851 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:09.946923 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Sep 5 23:50:09.946988 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Sep 5 23:50:09.947052 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:09.947125 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Sep 5 23:50:09.947192 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Sep 5 23:50:09.947262 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:09.947273 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Sep 5 23:50:09.947342 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Sep 5 23:50:09.947410 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Sep 5 23:50:09.947478 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:09.947489 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 5 23:50:09.947497 kernel: ACPI: button: Power Button [PWRB]
Sep 5 23:50:09.947506 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 5 23:50:09.948768 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Sep 5 23:50:09.948877 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Sep 5 23:50:09.948901 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 23:50:09.948910 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 5 23:50:09.948984 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Sep 5 23:50:09.948996 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Sep 5 23:50:09.949004 kernel: thunder_xcv, ver 1.0
Sep 5 23:50:09.949012 kernel: thunder_bgx, ver 1.0
Sep 5 23:50:09.949023 kernel: nicpf, ver 1.0
Sep 5 23:50:09.949031 kernel: nicvf, ver 1.0
Sep 5 23:50:09.949117 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 23:50:09.949180 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:50:09 UTC (1757116209)
Sep 5 23:50:09.949191 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 23:50:09.949199 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 5 23:50:09.949207 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 5 23:50:09.949215 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 23:50:09.949226 kernel: NET: Registered PF_INET6 protocol family
Sep 5 23:50:09.949234 kernel: Segment Routing with IPv6
Sep 5 23:50:09.949242 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 23:50:09.949250 kernel: NET: Registered PF_PACKET protocol family
Sep 5 23:50:09.949259 kernel: Key type dns_resolver registered
Sep 5 23:50:09.949267 kernel: registered taskstats version 1
Sep 5 23:50:09.949275 kernel: Loading compiled-in X.509 certificates
Sep 5 23:50:09.949284 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20'
Sep 5 23:50:09.949292 kernel: Key type .fscrypt registered
Sep 5 23:50:09.949299 kernel: Key type fscrypt-provisioning registered
Sep 5 23:50:09.949310 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 23:50:09.949317 kernel: ima: Allocated hash algorithm: sha1
Sep 5 23:50:09.949326 kernel: ima: No architecture policies found
Sep 5 23:50:09.949334 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 23:50:09.949341 kernel: clk: Disabling unused clocks
Sep 5 23:50:09.949349 kernel: Freeing unused kernel memory: 39424K
Sep 5 23:50:09.949357 kernel: Run /init as init process
Sep 5 23:50:09.949365 kernel: with arguments:
Sep 5 23:50:09.949375 kernel: /init
Sep 5 23:50:09.949383 kernel: with environment:
Sep 5 23:50:09.949390 kernel: HOME=/
Sep 5 23:50:09.949398 kernel: TERM=linux
Sep 5 23:50:09.949405 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 23:50:09.949416 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:50:09.949426 systemd[1]: Detected virtualization kvm.
Sep 5 23:50:09.949435 systemd[1]: Detected architecture arm64.
Sep 5 23:50:09.949445 systemd[1]: Running in initrd.
Sep 5 23:50:09.949454 systemd[1]: No hostname configured, using default hostname.
Sep 5 23:50:09.949462 systemd[1]: Hostname set to <localhost>.
Sep 5 23:50:09.949470 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 23:50:09.949479 systemd[1]: Queued start job for default target initrd.target.
Sep 5 23:50:09.949488 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:50:09.949496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:50:09.949505 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 23:50:09.949515 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:50:09.949524 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 23:50:09.949532 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 23:50:09.949603 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 23:50:09.949612 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 23:50:09.949621 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:50:09.949629 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:50:09.949642 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:50:09.949686 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:50:09.949695 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:50:09.949703 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:50:09.949714 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:50:09.949723 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:50:09.949732 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 23:50:09.949740 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 23:50:09.949750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:50:09.949759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:50:09.949767 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:50:09.949775 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:50:09.949784 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 23:50:09.949793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:50:09.949801 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 23:50:09.949809 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 23:50:09.949818 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:50:09.949828 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:50:09.949836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:09.949844 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 23:50:09.949853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:50:09.949892 systemd-journald[237]: Collecting audit messages is disabled. Sep 5 23:50:09.949916 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 23:50:09.949925 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:50:09.949934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:09.949945 systemd-journald[237]: Journal started Sep 5 23:50:09.949965 systemd-journald[237]: Runtime Journal (/run/log/journal/a528e777ff9f49f0bf762bbcac6692b4) is 8.0M, max 76.6M, 68.6M free. Sep 5 23:50:09.930937 systemd-modules-load[238]: Inserted module 'overlay' Sep 5 23:50:09.952558 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 23:50:09.955055 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:50:09.962641 kernel: Bridge firewalling registered Sep 5 23:50:09.961753 systemd-modules-load[238]: Inserted module 'br_netfilter' Sep 5 23:50:09.965962 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:50:09.970250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:50:09.972073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:50:09.973737 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:50:09.983840 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 5 23:50:09.990103 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:50:10.001207 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:50:10.012158 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:50:10.014501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:50:10.023060 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 23:50:10.026774 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:50:10.030804 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:50:10.040131 dracut-cmdline[272]: dracut-dracut-053 Sep 5 23:50:10.045311 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 5 23:50:10.073298 systemd-resolved[273]: Positive Trust Anchors: Sep 5 23:50:10.074214 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:50:10.074252 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:50:10.085123 systemd-resolved[273]: Defaulting to hostname 'linux'. Sep 5 23:50:10.087143 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:50:10.087993 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:50:10.143624 kernel: SCSI subsystem initialized Sep 5 23:50:10.148573 kernel: Loading iSCSI transport class v2.0-870. Sep 5 23:50:10.157584 kernel: iscsi: registered transport (tcp) Sep 5 23:50:10.171603 kernel: iscsi: registered transport (qla4xxx) Sep 5 23:50:10.171733 kernel: QLogic iSCSI HBA Driver Sep 5 23:50:10.227719 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 23:50:10.233814 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 23:50:10.257712 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 5 23:50:10.257807 kernel: device-mapper: uevent: version 1.0.3 Sep 5 23:50:10.258848 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 23:50:10.318581 kernel: raid6: neonx8 gen() 15689 MB/s Sep 5 23:50:10.335579 kernel: raid6: neonx4 gen() 15580 MB/s Sep 5 23:50:10.352620 kernel: raid6: neonx2 gen() 13107 MB/s Sep 5 23:50:10.369633 kernel: raid6: neonx1 gen() 10429 MB/s Sep 5 23:50:10.386605 kernel: raid6: int64x8 gen() 6915 MB/s Sep 5 23:50:10.403623 kernel: raid6: int64x4 gen() 7316 MB/s Sep 5 23:50:10.420589 kernel: raid6: int64x2 gen() 6092 MB/s Sep 5 23:50:10.437635 kernel: raid6: int64x1 gen() 5024 MB/s Sep 5 23:50:10.437742 kernel: raid6: using algorithm neonx8 gen() 15689 MB/s Sep 5 23:50:10.454619 kernel: raid6: .... xor() 11979 MB/s, rmw enabled Sep 5 23:50:10.454734 kernel: raid6: using neon recovery algorithm Sep 5 23:50:10.459852 kernel: xor: measuring software checksum speed Sep 5 23:50:10.459928 kernel: 8regs : 19617 MB/sec Sep 5 23:50:10.460773 kernel: 32regs : 19622 MB/sec Sep 5 23:50:10.460826 kernel: arm64_neon : 26822 MB/sec Sep 5 23:50:10.460847 kernel: xor: using function: arm64_neon (26822 MB/sec) Sep 5 23:50:10.515598 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 23:50:10.534259 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:50:10.540794 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:50:10.566661 systemd-udevd[456]: Using default interface naming scheme 'v255'. Sep 5 23:50:10.570336 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:50:10.583030 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 23:50:10.599705 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Sep 5 23:50:10.644199 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:50:10.649849 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:50:10.719366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:50:10.729133 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 23:50:10.764270 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 23:50:10.767159 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:50:10.768295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:50:10.769724 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:50:10.782978 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 23:50:10.807709 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:50:10.873581 kernel: scsi host0: Virtio SCSI HBA Sep 5 23:50:10.876724 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 23:50:10.878557 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 5 23:50:10.880582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:50:10.880734 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:50:10.883949 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:50:10.884880 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 5 23:50:10.885098 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:10.887945 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:10.894865 kernel: ACPI: bus type USB registered Sep 5 23:50:10.899879 kernel: usbcore: registered new interface driver usbfs Sep 5 23:50:10.899948 kernel: usbcore: registered new interface driver hub Sep 5 23:50:10.900231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:10.917600 kernel: usbcore: registered new device driver usb Sep 5 23:50:10.933443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:10.937563 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 5 23:50:10.942348 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 5 23:50:10.942718 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 5 23:50:10.942855 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 5 23:50:10.942945 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 5 23:50:10.944586 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 5 23:50:10.944833 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 23:50:10.945838 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 5 23:50:10.946034 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 5 23:50:10.946214 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:50:10.948851 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 5 23:50:10.950559 kernel: hub 1-0:1.0: USB hub found Sep 5 23:50:10.950826 kernel: hub 1-0:1.0: 4 ports detected Sep 5 23:50:10.953939 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 5 23:50:10.954212 kernel: hub 2-0:1.0: USB hub found Sep 5 23:50:10.954940 kernel: hub 2-0:1.0: 4 ports detected Sep 5 23:50:10.964195 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 5 23:50:10.965561 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 5 23:50:10.965872 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 5 23:50:10.965972 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 5 23:50:10.967560 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 23:50:10.972514 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 23:50:10.972647 kernel: GPT:17805311 != 80003071 Sep 5 23:50:10.972661 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 23:50:10.972673 kernel: GPT:17805311 != 80003071 Sep 5 23:50:10.972684 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 23:50:10.972694 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:50:10.974588 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 5 23:50:10.983226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:50:11.032591 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (521) Sep 5 23:50:11.036106 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (507) Sep 5 23:50:11.040164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 5 23:50:11.046913 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Sep 5 23:50:11.056702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 5 23:50:11.063258 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Sep 5 23:50:11.064201 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Sep 5 23:50:11.073072 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 23:50:11.087608 disk-uuid[571]: Primary Header is updated.
Sep 5 23:50:11.087608 disk-uuid[571]: Secondary Entries is updated.
Sep 5 23:50:11.087608 disk-uuid[571]: Secondary Header is updated.
Sep 5 23:50:11.099582 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:50:11.109581 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:50:11.194673 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 5 23:50:11.332860 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Sep 5 23:50:11.332930 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Sep 5 23:50:11.333112 kernel: usbcore: registered new interface driver usbhid
Sep 5 23:50:11.333124 kernel: usbhid: USB HID core driver
Sep 5 23:50:11.440576 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Sep 5 23:50:11.570589 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Sep 5 23:50:11.624626 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Sep 5 23:50:12.121676 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:50:12.122320 disk-uuid[573]: The operation has completed successfully.
Sep 5 23:50:12.195325 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 23:50:12.195452 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 23:50:12.203915 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 23:50:12.211768 sh[589]: Success
Sep 5 23:50:12.226613 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 23:50:12.302435 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 23:50:12.306738 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 23:50:12.308582 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 23:50:12.340186 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 5 23:50:12.340277 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:50:12.340291 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 23:50:12.341012 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 23:50:12.342114 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 23:50:12.351597 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 5 23:50:12.355674 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 23:50:12.356437 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 23:50:12.362854 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
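
verity-setup validates /dev/mapper/usr against the verity.usrhash root hash passed on the kernel command line. Conceptually this is a Merkle tree: data blocks are hashed, the digests are hashed again level by level, and only the final root must be trusted. A conceptual sketch follows; it is not dm-verity's exact on-disk format, which also involves a salt and a fixed superblock layout:

    import hashlib

    def verity_root(data: bytes, block=4096, fanout=128) -> str:
        # Hash each data block, then hash groups of digests upward
        # until a single root digest remains (fanout is illustrative).
        level = [hashlib.sha256(data[i:i + block]).digest()
                 for i in range(0, len(data), block)] or [hashlib.sha256(b"").digest()]
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + fanout])).digest()
                     for i in range(0, len(level), fanout)]
        return level[0].hex()

Any flipped bit on the device changes a leaf digest, propagates to the root, and no longer matches the pinned usrhash, so corrupted /usr content is rejected rather than silently used.
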
Sep 5 23:50:12.364651 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 23:50:12.388030 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:50:12.388135 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:50:12.388963 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:50:12.395605 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 5 23:50:12.395716 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:50:12.412120 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 23:50:12.413797 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:50:12.421025 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 23:50:12.428912 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 23:50:12.546981 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:50:12.550338 ignition[683]: Ignition 2.19.0
Sep 5 23:50:12.551028 ignition[683]: Stage: fetch-offline
Sep 5 23:50:12.551097 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:12.551108 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:12.551366 ignition[683]: parsed url from cmdline: ""
Sep 5 23:50:12.551371 ignition[683]: no config URL provided
Sep 5 23:50:12.551376 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:50:12.551387 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:50:12.551393 ignition[683]: failed to fetch config: resource requires networking
Sep 5 23:50:12.551701 ignition[683]: Ignition finished successfully
Sep 5 23:50:12.558899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:50:12.564013 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:50:12.581239 systemd-networkd[777]: lo: Link UP
Sep 5 23:50:12.581259 systemd-networkd[777]: lo: Gained carrier
Sep 5 23:50:12.583144 systemd-networkd[777]: Enumeration completed
Sep 5 23:50:12.583426 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:50:12.584309 systemd[1]: Reached target network.target - Network.
Sep 5 23:50:12.586154 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:50:12.586157 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:50:12.587151 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:50:12.587155 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:50:12.588130 systemd-networkd[777]: eth0: Link UP
Sep 5 23:50:12.588134 systemd-networkd[777]: eth0: Gained carrier
Sep 5 23:50:12.588144 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:50:12.595074 systemd-networkd[777]: eth1: Link UP
Sep 5 23:50:12.595079 systemd-networkd[777]: eth1: Gained carrier
Sep 5 23:50:12.595092 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:50:12.595903 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 5 23:50:12.613118 ignition[780]: Ignition 2.19.0
Sep 5 23:50:12.613136 ignition[780]: Stage: fetch
Sep 5 23:50:12.613428 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:12.613443 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:12.614662 ignition[780]: parsed url from cmdline: ""
Sep 5 23:50:12.614669 ignition[780]: no config URL provided
Sep 5 23:50:12.614681 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:50:12.614698 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:50:12.614731 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 5 23:50:12.615455 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 5 23:50:12.633688 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 5 23:50:12.667665 systemd-networkd[777]: eth0: DHCPv4 address 91.98.45.119/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 5 23:50:12.815686 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 5 23:50:12.821671 ignition[780]: GET result: OK
Sep 5 23:50:12.821911 ignition[780]: parsing config with SHA512: 8512b0ea91d6de2b0d611e7ff9024fab2ca20264eda5f7df80be681fd1098428969d7827644553f88cfc186444b7756ac5b594648829330512f1af13e1141388
Sep 5 23:50:12.827501 unknown[780]: fetched base config from "system"
Sep 5 23:50:12.828120 ignition[780]: fetch: fetch complete
Sep 5 23:50:12.827523 unknown[780]: fetched base config from "system"
Sep 5 23:50:12.828128 ignition[780]: fetch: fetch passed
Sep 5 23:50:12.827557 unknown[780]: fetched user config from "hetzner"
Sep 5 23:50:12.828198 ignition[780]: Ignition finished successfully
Sep 5 23:50:12.831631 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 5 23:50:12.838819 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 23:50:12.865250 ignition[788]: Ignition 2.19.0
Sep 5 23:50:12.866470 ignition[788]: Stage: kargs
Sep 5 23:50:12.867179 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:12.867845 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:12.870616 ignition[788]: kargs: kargs passed
Sep 5 23:50:12.870736 ignition[788]: Ignition finished successfully
Sep 5 23:50:12.875661 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 23:50:12.883872 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 23:50:12.919181 ignition[795]: Ignition 2.19.0
Sep 5 23:50:12.919206 ignition[795]: Stage: disks
Sep 5 23:50:12.919635 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:12.919658 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:12.921459 ignition[795]: disks: disks passed
Sep 5 23:50:12.921560 ignition[795]: Ignition finished successfully
Sep 5 23:50:12.923813 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 23:50:12.924911 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 23:50:12.925910 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 23:50:12.927230 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:50:12.928355 systemd[1]: Reached target sysinit.target - System Initialization.
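
The fetch stage above shows the usual pattern on this platform: attempt #1 races DHCP and fails with "network is unreachable", and attempt #2 succeeds once both interfaces hold leases. A rough Python equivalent of what the log records, runnable only from inside a Hetzner instance (the endpoint is taken from the log; the SHA512 mirrors what Ignition prints before parsing):

    import hashlib
    import urllib.request

    # Link-local metadata endpoint, as logged by ignition[780] above.
    URL = "http://169.254.169.254/hetzner/v1/userdata"
    with urllib.request.urlopen(URL, timeout=10) as resp:
        userdata = resp.read()
    print("parsing config with SHA512:", hashlib.sha512(userdata).hexdigest())
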
Sep 5 23:50:12.929670 systemd[1]: Reached target basic.target - Basic System.
Sep 5 23:50:12.934895 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 23:50:12.973508 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 5 23:50:12.978868 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 23:50:12.986783 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 23:50:13.037587 kernel: EXT4-fs (sda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 5 23:50:13.037187 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 23:50:13.038724 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:50:13.048799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:50:13.053746 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 23:50:13.059209 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 5 23:50:13.060188 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:50:13.060227 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:50:13.068674 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (811)
Sep 5 23:50:13.070162 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:50:13.070236 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:50:13.071562 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:50:13.075567 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 5 23:50:13.075645 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:50:13.080240 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 23:50:13.082932 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:50:13.096812 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 23:50:13.149157 coreos-metadata[813]: Sep 05 23:50:13.149 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 5 23:50:13.152129 coreos-metadata[813]: Sep 05 23:50:13.152 INFO Fetch successful
Sep 5 23:50:13.152814 coreos-metadata[813]: Sep 05 23:50:13.152 INFO wrote hostname ci-4081-3-5-n-2b989ca6ad to /sysroot/etc/hostname
Sep 5 23:50:13.156095 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 23:50:13.157031 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 5 23:50:13.164016 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Sep 5 23:50:13.170060 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 23:50:13.176691 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 23:50:13.308094 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 23:50:13.313798 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 23:50:13.317837 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 23:50:13.330610 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:50:13.342693 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 23:50:13.357666 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 23:50:13.367413 ignition[929]: INFO : Ignition 2.19.0
Sep 5 23:50:13.367413 ignition[929]: INFO : Stage: mount
Sep 5 23:50:13.369756 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:13.369756 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:13.369756 ignition[929]: INFO : mount: mount passed
Sep 5 23:50:13.369756 ignition[929]: INFO : Ignition finished successfully
Sep 5 23:50:13.371041 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 23:50:13.379898 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 23:50:13.398937 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:50:13.410907 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941)
Sep 5 23:50:13.411000 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:50:13.411025 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:50:13.411841 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:50:13.415571 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 5 23:50:13.415647 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:50:13.418654 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:50:13.444042 ignition[958]: INFO : Ignition 2.19.0
Sep 5 23:50:13.445403 ignition[958]: INFO : Stage: files
Sep 5 23:50:13.445957 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:13.445957 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:13.447485 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 23:50:13.448720 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 23:50:13.448720 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 23:50:13.452433 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 23:50:13.454026 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 23:50:13.454026 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 23:50:13.452989 unknown[958]: wrote ssh authorized keys file for user: core
Sep 5 23:50:13.456903 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 5 23:50:13.456903 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 5 23:50:13.620050 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 23:50:13.958473 systemd-networkd[777]: eth1: Gained IPv6LL
Sep 5 23:50:14.086248 systemd-networkd[777]: eth0: Gained IPv6LL
Sep 5 23:50:17.370694 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 5 23:50:17.372476 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 23:50:17.372476 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 23:50:17.372476 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:50:17.372476 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:50:17.372476 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:50:17.372476 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:50:17.372476 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:50:17.372476 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:50:17.372476 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:50:17.385532 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:50:17.385532 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 5 23:50:17.385532 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 5 23:50:17.385532 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 5 23:50:17.385532 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 5 23:50:17.797270 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 23:50:19.318704 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 5 23:50:19.318704 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:50:19.323950 ignition[958]: INFO : files: files passed
Sep 5 23:50:19.323950 ignition[958]: INFO : Ignition finished successfully
Sep 5 23:50:19.328428 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 23:50:19.340814 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 23:50:19.345942 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 23:50:19.348155 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 23:50:19.348559 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 23:50:19.370255 initrd-setup-root-after-ignition[987]: grep:
Sep 5 23:50:19.370255 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:50:19.372255 initrd-setup-root-after-ignition[987]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:50:19.372255 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:50:19.374617 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:50:19.375741 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 23:50:19.382945 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 23:50:19.427074 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 23:50:19.427206 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 23:50:19.430046 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 23:50:19.432476 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 23:50:19.433250 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 23:50:19.434757 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 23:50:19.467207 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:50:19.471932 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 23:50:19.497715 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:50:19.498500 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:50:19.500155 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 23:50:19.501427 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 23:50:19.501844 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:50:19.504150 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 23:50:19.504937 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 23:50:19.505848 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
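
The "setting preset to enabled" operations in the files stage above correspond to dropping a systemd preset entry, so that first-boot preset application enables the unit without Ignition calling systemctl directly. A preset file uses this one-line syntax (the path shown is illustrative, not taken from the log):

    # /etc/systemd/system-preset/20-ignition.preset  (illustrative path)
    enable prepare-helm.service
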
Sep 5 23:50:19.507081 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:50:19.508187 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 23:50:19.509421 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 23:50:19.510428 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:50:19.511580 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 23:50:19.512645 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 23:50:19.513723 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 23:50:19.514587 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 23:50:19.514781 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:50:19.516355 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:50:19.518683 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:50:19.519746 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 23:50:19.519874 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:50:19.520942 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 23:50:19.521131 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:50:19.522601 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 23:50:19.522790 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:50:19.523947 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 23:50:19.524121 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 23:50:19.524861 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 5 23:50:19.524970 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 5 23:50:19.536628 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 23:50:19.541472 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 23:50:19.543139 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 23:50:19.545793 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:50:19.546752 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 23:50:19.548771 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:50:19.558755 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 23:50:19.559166 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 23:50:19.563357 ignition[1011]: INFO : Ignition 2.19.0
Sep 5 23:50:19.563357 ignition[1011]: INFO : Stage: umount
Sep 5 23:50:19.563357 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:19.563357 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:19.572743 ignition[1011]: INFO : umount: umount passed
Sep 5 23:50:19.572743 ignition[1011]: INFO : Ignition finished successfully
Sep 5 23:50:19.568880 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 23:50:19.570479 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 23:50:19.572524 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 23:50:19.577456 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 23:50:19.578070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 23:50:19.579840 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 23:50:19.579963 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 23:50:19.581195 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 23:50:19.581248 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 23:50:19.582173 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 5 23:50:19.582215 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 5 23:50:19.583136 systemd[1]: Stopped target network.target - Network.
Sep 5 23:50:19.584072 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 23:50:19.584150 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:50:19.585351 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 23:50:19.586193 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 23:50:19.589650 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:50:19.591967 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 23:50:19.593018 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 23:50:19.594215 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 23:50:19.594269 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:50:19.595137 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 23:50:19.595177 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:50:19.596131 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 23:50:19.596188 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 23:50:19.597201 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 23:50:19.597247 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 23:50:19.598247 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 23:50:19.598294 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 23:50:19.599421 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 23:50:19.600514 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 23:50:19.603622 systemd-networkd[777]: eth0: DHCPv6 lease lost
Sep 5 23:50:19.608168 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 23:50:19.608355 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 23:50:19.610804 systemd-networkd[777]: eth1: DHCPv6 lease lost
Sep 5 23:50:19.614101 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 23:50:19.614233 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 23:50:19.616314 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 23:50:19.616388 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:50:19.622944 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 23:50:19.623482 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 23:50:19.623610 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:50:19.630266 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 23:50:19.630364 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:50:19.631528 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 23:50:19.631736 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:50:19.633192 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 23:50:19.633246 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:50:19.634935 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:50:19.643488 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 23:50:19.643739 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:50:19.648477 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 23:50:19.648580 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:50:19.649225 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 23:50:19.649272 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:50:19.650981 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 23:50:19.651070 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:50:19.653114 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 23:50:19.653176 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:50:19.654995 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:50:19.655065 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:50:19.662821 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 23:50:19.663420 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 23:50:19.663512 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:50:19.665125 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 5 23:50:19.665182 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:50:19.666303 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 23:50:19.666349 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:50:19.668371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:50:19.668427 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:50:19.669633 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 23:50:19.671578 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 23:50:19.679944 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 23:50:19.680062 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 23:50:19.682001 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 23:50:19.688836 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 23:50:19.698764 systemd[1]: Switching root.
Sep 5 23:50:19.741195 systemd-journald[237]: Journal stopped
Sep 5 23:50:20.742961 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Sep 5 23:50:20.743045 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 23:50:20.743064 kernel: SELinux: policy capability open_perms=1
Sep 5 23:50:20.743075 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 23:50:20.743085 kernel: SELinux: policy capability always_check_network=0
Sep 5 23:50:20.743099 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 23:50:20.743113 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 23:50:20.743124 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 23:50:20.743133 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 23:50:20.743144 kernel: audit: type=1403 audit(1757116219.887:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 23:50:20.743156 systemd[1]: Successfully loaded SELinux policy in 40.635ms.
Sep 5 23:50:20.743174 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.464ms.
Sep 5 23:50:20.743192 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:50:20.743203 systemd[1]: Detected virtualization kvm.
Sep 5 23:50:20.743216 systemd[1]: Detected architecture arm64.
Sep 5 23:50:20.743231 systemd[1]: Detected first boot.
Sep 5 23:50:20.743242 systemd[1]: Hostname set to <ci-4081-3-5-n-2b989ca6ad>.
Sep 5 23:50:20.743255 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 23:50:20.743266 zram_generator::config[1054]: No configuration found.
Sep 5 23:50:20.743278 systemd[1]: Populated /etc with preset unit settings.
Sep 5 23:50:20.743289 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 23:50:20.743301 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 23:50:20.743315 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 23:50:20.743326 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 23:50:20.743337 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 23:50:20.743348 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 23:50:20.743359 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 23:50:20.743371 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 23:50:20.743382 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 23:50:20.743393 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 23:50:20.743406 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 23:50:20.743418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:50:20.743429 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:50:20.743440 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 23:50:20.743451 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 23:50:20.743462 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 23:50:20.743474 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:50:20.743486 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 5 23:50:20.743497 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:50:20.743510 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 23:50:20.743522 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 23:50:20.743533 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:50:20.745672 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 23:50:20.745687 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:50:20.745708 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:50:20.745720 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:50:20.745737 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:50:20.745749 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 23:50:20.745761 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 23:50:20.745772 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:50:20.745783 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:50:20.745794 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:50:20.745806 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 23:50:20.745817 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 23:50:20.745829 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 23:50:20.745841 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 23:50:20.745852 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 23:50:20.745864 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 23:50:20.745874 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 23:50:20.745886 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 23:50:20.745898 systemd[1]: Reached target machines.target - Containers.
Sep 5 23:50:20.745909 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 23:50:20.745920 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:50:20.745935 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:50:20.745949 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 23:50:20.745960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 23:50:20.745971 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 23:50:20.745982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 23:50:20.745993 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 23:50:20.746007 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 23:50:20.746019 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 23:50:20.746035 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 23:50:20.746049 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 23:50:20.746060 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 23:50:20.746071 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 23:50:20.746081 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:50:20.746092 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:50:20.746103 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 23:50:20.746117 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 23:50:20.746128 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 23:50:20.746139 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 23:50:20.746149 kernel: fuse: init (API version 7.39)
Sep 5 23:50:20.746161 kernel: loop: module loaded
Sep 5 23:50:20.746171 systemd[1]: Stopped verity-setup.service.
Sep 5 23:50:20.746182 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 23:50:20.746193 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 23:50:20.746206 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 23:50:20.746217 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 23:50:20.746228 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 23:50:20.746241 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 23:50:20.746252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:50:20.746265 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 23:50:20.746277 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 23:50:20.746288 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:50:20.746300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 23:50:20.746311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:50:20.746322 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 23:50:20.746337 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 23:50:20.746348 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 23:50:20.746360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:50:20.746373 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 23:50:20.746383 kernel: ACPI: bus type drm_connector registered
Sep 5 23:50:20.746394 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:50:20.746405 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 23:50:20.746416 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 23:50:20.746430 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 23:50:20.746441 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 23:50:20.746452 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 23:50:20.746509 systemd-journald[1117]: Collecting audit messages is disabled.
Sep 5 23:50:20.746612 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 23:50:20.746630 systemd-journald[1117]: Journal started
Sep 5 23:50:20.746655 systemd-journald[1117]: Runtime Journal (/run/log/journal/a528e777ff9f49f0bf762bbcac6692b4) is 8.0M, max 76.6M, 68.6M free.
Sep 5 23:50:20.418704 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 23:50:20.443233 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 5 23:50:20.443800 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 23:50:20.750862 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 23:50:20.750943 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 23:50:20.754637 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:50:20.758012 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 5 23:50:20.768617 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 5 23:50:20.776292 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 23:50:20.776392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:50:20.779596 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 23:50:20.782628 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:50:20.788325 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 23:50:20.788428 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 23:50:20.800716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:50:20.806679 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 23:50:20.811577 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 23:50:20.825038 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:50:20.817605 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 23:50:20.818532 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 23:50:20.819702 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 23:50:20.822131 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 5 23:50:20.850713 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 23:50:20.855337 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 23:50:20.864811 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 23:50:20.868304 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 5 23:50:20.905684 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:50:20.915697 kernel: loop0: detected capacity change from 0 to 8
Sep 5 23:50:20.927892 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 23:50:20.931188 systemd-journald[1117]: Time spent on flushing to /var/log/journal/a528e777ff9f49f0bf762bbcac6692b4 is 49.396ms for 1132 entries.
Sep 5 23:50:20.931188 systemd-journald[1117]: System Journal (/var/log/journal/a528e777ff9f49f0bf762bbcac6692b4) is 8.0M, max 584.8M, 576.8M free.
Sep 5 23:50:20.993642 systemd-journald[1117]: Received client request to flush runtime journal.
Sep 5 23:50:20.993718 kernel: loop1: detected capacity change from 0 to 207008
Sep 5 23:50:20.956920 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:50:20.972195 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 5 23:50:20.999316 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 23:50:21.005037 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Sep 5 23:50:21.005087 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Sep 5 23:50:21.010122 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 23:50:21.011473 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 5 23:50:21.014088 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 5 23:50:21.024692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:50:21.036205 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 23:50:21.057011 kernel: loop2: detected capacity change from 0 to 114328
Sep 5 23:50:21.105644 kernel: loop3: detected capacity change from 0 to 114432
Sep 5 23:50:21.106109 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 23:50:21.118878 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:50:21.146620 kernel: loop4: detected capacity change from 0 to 8
Sep 5 23:50:21.150135 kernel: loop5: detected capacity change from 0 to 207008
Sep 5 23:50:21.157038 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Sep 5 23:50:21.157129 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Sep 5 23:50:21.166656 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:50:21.175575 kernel: loop6: detected capacity change from 0 to 114328
Sep 5 23:50:21.193665 kernel: loop7: detected capacity change from 0 to 114432
Sep 5 23:50:21.206579 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Sep 5 23:50:21.207117 (sd-merge)[1197]: Merged extensions into '/usr'.
Sep 5 23:50:21.216756 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 23:50:21.216783 systemd[1]: Reloading...
Sep 5 23:50:21.343110 zram_generator::config[1224]: No configuration found.
Sep 5 23:50:21.470677 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 23:50:21.530120 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:50:21.578031 systemd[1]: Reloading finished in 360 ms.
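
(sd-merge) above is systemd-sysext merging the four named extension images into an overlay on /usr, which is why a manager reload follows. A toy enumeration of merge candidates the way this log reports them; real systemd-sysext also scans /run/extensions and /var/lib/extensions and validates each image's extension-release metadata before merging:

    from pathlib import Path

    def extension_candidates(root="/etc/extensions"):
        # One candidate per *.raw image or symlink, e.g. the
        # kubernetes.raw link written by the Ignition files stage above.
        return sorted(p.name.removesuffix(".raw")
                      for p in Path(root).glob("*.raw"))

    print("Using extensions", extension_candidates())
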
Sep 5 23:50:21.600525 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 23:50:21.605420 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 23:50:21.617520 systemd[1]: Starting ensure-sysext.service...
Sep 5 23:50:21.620353 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:50:21.630417 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Sep 5 23:50:21.630654 systemd[1]: Reloading...
Sep 5 23:50:21.679506 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 23:50:21.680161 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 23:50:21.681805 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 23:50:21.682038 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Sep 5 23:50:21.682096 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Sep 5 23:50:21.688174 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 23:50:21.688193 systemd-tmpfiles[1262]: Skipping /boot
Sep 5 23:50:21.705224 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 23:50:21.705251 systemd-tmpfiles[1262]: Skipping /boot
Sep 5 23:50:21.776938 zram_generator::config[1291]: No configuration found.
Sep 5 23:50:21.880323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:50:21.928777 systemd[1]: Reloading finished in 297 ms.
Sep 5 23:50:21.951755 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 23:50:21.961372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:50:21.975037 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 5 23:50:21.979862 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 23:50:21.986913 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 23:50:21.992837 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:50:21.995441 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:50:22.001870 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 23:50:22.010041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:50:22.014950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 23:50:22.020965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 23:50:22.029011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 23:50:22.030111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:50:22.033491 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:50:22.033788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:50:22.038041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:50:22.046944 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 23:50:22.048841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:50:22.053009 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 23:50:22.058661 systemd[1]: Finished ensure-sysext.service.
Sep 5 23:50:22.067894 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 5 23:50:22.082609 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 23:50:22.092906 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 23:50:22.106518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:50:22.108700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 23:50:22.114594 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 23:50:22.118158 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:50:22.119683 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 23:50:22.124272 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 23:50:22.124481 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 23:50:22.129747 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 23:50:22.137364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:50:22.138032 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 23:50:22.141720 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 23:50:22.149116 systemd-udevd[1338]: Using default interface naming scheme 'v255'.
Sep 5 23:50:22.149129 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:50:22.149221 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 23:50:22.149267 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 23:50:22.157656 augenrules[1364]: No rules
Sep 5 23:50:22.159772 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 5 23:50:22.178661 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:50:22.188488 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:50:22.189776 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 23:50:22.279739 systemd-networkd[1376]: lo: Link UP
Sep 5 23:50:22.280140 systemd-networkd[1376]: lo: Gained carrier
Sep 5 23:50:22.280865 systemd-networkd[1376]: Enumeration completed
Sep 5 23:50:22.281090 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:50:22.287068 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 23:50:22.351109 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 23:50:22.352005 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 23:50:22.355506 systemd-resolved[1334]: Positive Trust Anchors: Sep 5 23:50:22.355533 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:50:22.356429 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:50:22.363820 systemd-resolved[1334]: Using system hostname 'ci-4081-3-5-n-2b989ca6ad'. Sep 5 23:50:22.367616 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:50:22.368570 systemd[1]: Reached target network.target - Network. Sep 5 23:50:22.369694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:50:22.385150 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 5 23:50:22.435042 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:22.435195 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:22.436178 systemd-networkd[1376]: eth0: Link UP Sep 5 23:50:22.436316 systemd-networkd[1376]: eth0: Gained carrier Sep 5 23:50:22.436341 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:22.456372 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:22.456382 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:22.457026 systemd-networkd[1376]: eth1: Link UP Sep 5 23:50:22.457030 systemd-networkd[1376]: eth1: Gained carrier Sep 5 23:50:22.457045 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:22.460737 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 23:50:22.483681 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 5 23:50:22.484354 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Sep 5 23:50:22.499835 systemd-networkd[1376]: eth0: DHCPv4 address 91.98.45.119/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 5 23:50:22.500481 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Sep 5 23:50:22.501142 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Sep 5 23:50:22.508190 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. 
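
Note the DHCPv4 lease on eth0: a single-address /32 (91.98.45.119/32) with a gateway (172.31.1.1) that is necessarily outside that prefix, a setup typical of Hetzner Cloud. The gateway is only reachable because networkd installs it as an on-link route; a plain prefix lookup cannot resolve it, as a quick standard-library check shows:

    import ipaddress

    addr = ipaddress.ip_interface("91.98.45.119/32")
    gw = ipaddress.ip_address("172.31.1.1")

    print(gw in addr.network)  # False: the /32 contains only the host itself,
                               # so the gateway must be installed as an on-link
                               # route rather than resolved through the prefix.
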
Sep 5 23:50:22.509079 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:22.519879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:50:22.531843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:50:22.536253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:50:22.538133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:22.538294 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:50:22.538870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:22.540646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:22.542456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:50:22.542999 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:22.545600 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 5 23:50:22.545694 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 5 23:50:22.545718 kernel: [drm] features: -context_init Sep 5 23:50:22.547770 kernel: [drm] number of scanouts: 1 Sep 5 23:50:22.547851 kernel: [drm] number of cap sets: 0 Sep 5 23:50:22.553046 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:50:22.554605 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 5 23:50:22.567437 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:22.567643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:50:22.571061 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1393) Sep 5 23:50:22.571154 kernel: Console: switching to colour frame buffer device 160x50 Sep 5 23:50:22.571001 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:50:22.583743 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 5 23:50:22.642450 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 5 23:50:22.654021 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 23:50:22.660889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:22.673269 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 23:50:22.742854 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:22.793102 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 23:50:22.800156 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 23:50:22.826632 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:50:22.855928 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
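
The "Console: switching to colour frame buffer device 160x50" figure above is in character cells, not pixels. Assuming the kernel's usual 8x16 console font (an assumption, not stated in the log), the arithmetic lands exactly on virtio-gpu's common 1280x800 default mode:

    cols, rows = 160, 50    # from the frame buffer console message above
    font_w, font_h = 8, 16  # assumed: the typical kernel console font size

    print(cols * font_w, "x", rows * font_h)  # 1280 x 800
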
Sep 5 23:50:22.859961 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:50:22.860709 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:50:22.861392 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 23:50:22.862330 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 23:50:22.863312 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 23:50:22.864143 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 23:50:22.864903 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 23:50:22.865566 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:50:22.865604 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:50:22.866064 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:50:22.867849 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:50:22.870381 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:50:22.875866 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:50:22.878435 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 23:50:22.880219 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 23:50:22.881250 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:50:22.882245 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:50:22.883085 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:50:22.883129 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:50:22.886737 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:50:22.891469 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:50:22.891841 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 23:50:22.894905 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:50:22.900796 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 23:50:22.905928 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 23:50:22.909766 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 23:50:22.918787 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 23:50:22.923801 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 23:50:22.929399 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 5 23:50:22.934874 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 23:50:22.939970 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 23:50:22.949113 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 23:50:22.952526 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
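
docker.socket, sshd.socket and dbus.socket above are socket units: systemd binds the listening socket itself and hands it to the service at start via the sd_listen_fds protocol (LISTEN_PID/LISTEN_FDS in the environment, inherited descriptors starting at fd 3). A minimal socket-activated server sketch, assuming it is launched from a matching .service with Accept=no; the fallback path is illustrative:

    import os, socket

    SD_LISTEN_FDS_START = 3  # first inherited fd, per the sd_listen_fds(3) protocol

    def listen_fds():
        # Only trust the variables if they were addressed to this process.
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        n = int(os.environ.get("LISTEN_FDS", "0"))
        return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + n))

    fds = listen_fds()
    if fds:
        srv = socket.socket(fileno=fds[0])  # adopt the systemd-provided listening socket
    else:
        path = "/tmp/demo.sock"             # illustrative fallback for running by hand
        try:
            os.unlink(path)
        except FileNotFoundError:
            pass
        srv = socket.socket(socket.AF_UNIX)
        srv.bind(path)
        srv.listen()

    conn, _ = srv.accept()
    conn.sendall(b"hello from a socket-activated service\n")
    conn.close()
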
Sep 5 23:50:22.954176 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 23:50:22.959488 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 23:50:22.964693 dbus-daemon[1446]: [system] SELinux support is enabled Sep 5 23:50:22.965064 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:50:22.976768 jq[1447]: false Sep 5 23:50:22.967776 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 23:50:22.985672 extend-filesystems[1448]: Found loop4 Sep 5 23:50:22.985672 extend-filesystems[1448]: Found loop5 Sep 5 23:50:22.985672 extend-filesystems[1448]: Found loop6 Sep 5 23:50:22.985672 extend-filesystems[1448]: Found loop7 Sep 5 23:50:22.985672 extend-filesystems[1448]: Found sda Sep 5 23:50:22.985672 extend-filesystems[1448]: Found sda1 Sep 5 23:50:22.985672 extend-filesystems[1448]: Found sda2 Sep 5 23:50:22.985672 extend-filesystems[1448]: Found sda3 Sep 5 23:50:22.985672 extend-filesystems[1448]: Found usr Sep 5 23:50:22.985672 extend-filesystems[1448]: Found sda4 Sep 5 23:50:22.985672 extend-filesystems[1448]: Found sda6 Sep 5 23:50:22.985672 extend-filesystems[1448]: Found sda7 Sep 5 23:50:22.985672 extend-filesystems[1448]: Found sda9 Sep 5 23:50:22.985672 extend-filesystems[1448]: Checking size of /dev/sda9 Sep 5 23:50:23.052782 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 5 23:50:22.991323 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:50:23.053030 extend-filesystems[1448]: Resized partition /dev/sda9 Sep 5 23:50:23.054812 coreos-metadata[1445]: Sep 05 23:50:23.004 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 5 23:50:23.054812 coreos-metadata[1445]: Sep 05 23:50:23.010 INFO Fetch successful Sep 5 23:50:23.054812 coreos-metadata[1445]: Sep 05 23:50:23.010 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 5 23:50:23.054812 coreos-metadata[1445]: Sep 05 23:50:23.013 INFO Fetch successful Sep 5 23:50:23.013956 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:50:23.056381 tar[1463]: linux-arm64/LICENSE Sep 5 23:50:23.056381 tar[1463]: linux-arm64/helm Sep 5 23:50:23.058243 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024) Sep 5 23:50:23.014576 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 23:50:23.026523 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:50:23.029149 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 23:50:23.064199 jq[1461]: true Sep 5 23:50:23.032445 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 23:50:23.032479 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 23:50:23.045186 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:50:23.048518 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
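
The "resizing filesystem from 1617920 to 9393147 blocks" message above is in 4 KiB ext4 blocks; translated, it is the root filesystem growing from the shipped image size to the full disk. Worked out with the figures from the log:

    BLOCK = 4096  # ext4 block size used by this filesystem
    old_blocks, new_blocks = 1617920, 9393147

    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {gib(old_blocks):.2f} GiB")  # ~6.17 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")  # ~35.83 GiB
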
Sep 5 23:50:23.060059 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:50:23.075440 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:50:23.075705 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 23:50:23.114320 jq[1484]: true Sep 5 23:50:23.148085 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1384) Sep 5 23:50:23.151598 update_engine[1459]: I20250905 23:50:23.145459 1459 main.cc:92] Flatcar Update Engine starting Sep 5 23:50:23.169193 update_engine[1459]: I20250905 23:50:23.168987 1459 update_check_scheduler.cc:74] Next update check in 5m57s Sep 5 23:50:23.173553 systemd[1]: Started update-engine.service - Update Engine. Sep 5 23:50:23.183486 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 23:50:23.185791 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 23:50:23.195901 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 23:50:23.241605 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 5 23:50:23.256510 systemd-logind[1458]: New seat seat0. Sep 5 23:50:23.262652 extend-filesystems[1472]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 5 23:50:23.262652 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 5 23:50:23.262652 extend-filesystems[1472]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 5 23:50:23.262388 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 23:50:23.272526 extend-filesystems[1448]: Resized filesystem in /dev/sda9 Sep 5 23:50:23.272526 extend-filesystems[1448]: Found sr0 Sep 5 23:50:23.262406 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 5 23:50:23.262860 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 23:50:23.269945 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 23:50:23.270317 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 23:50:23.277924 bash[1516]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:50:23.279629 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 23:50:23.293023 systemd[1]: Starting sshkeys.service... Sep 5 23:50:23.320400 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 5 23:50:23.329022 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 5 23:50:23.369982 coreos-metadata[1525]: Sep 05 23:50:23.369 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 5 23:50:23.371930 coreos-metadata[1525]: Sep 05 23:50:23.371 INFO Fetch successful Sep 5 23:50:23.373660 unknown[1525]: wrote ssh authorized keys file for user: core Sep 5 23:50:23.386006 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:50:23.412928 update-ssh-keys[1532]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:50:23.414320 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 23:50:23.424209 systemd[1]: Finished sshkeys.service. 
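
update_engine logs only a relative delay ("Next update check in 5m57s"); pinning that to wall-clock time is a one-line conversion from the timestamp of the log entry above:

    from datetime import datetime, timedelta

    logged = datetime(2025, 9, 5, 23, 50, 23)  # when the delay was logged
    next_check = logged + timedelta(minutes=5, seconds=57)
    print(next_check.time())                   # 23:56:20
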
Sep 5 23:50:23.449906 containerd[1476]: time="2025-09-05T23:50:23.449757640Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 23:50:23.511522 containerd[1476]: time="2025-09-05T23:50:23.511462160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:23.515439 containerd[1476]: time="2025-09-05T23:50:23.514331040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:23.515439 containerd[1476]: time="2025-09-05T23:50:23.514379680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 23:50:23.515439 containerd[1476]: time="2025-09-05T23:50:23.514400960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 23:50:23.518809 containerd[1476]: time="2025-09-05T23:50:23.518757440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 23:50:23.519255 containerd[1476]: time="2025-09-05T23:50:23.519233680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:23.519435 containerd[1476]: time="2025-09-05T23:50:23.519413000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:23.519636 containerd[1476]: time="2025-09-05T23:50:23.519614760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:23.519952 containerd[1476]: time="2025-09-05T23:50:23.519923560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:23.520270 containerd[1476]: time="2025-09-05T23:50:23.520248920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:23.520391 containerd[1476]: time="2025-09-05T23:50:23.520366560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:23.522114 containerd[1476]: time="2025-09-05T23:50:23.521171720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:23.522114 containerd[1476]: time="2025-09-05T23:50:23.521305920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:23.522114 containerd[1476]: time="2025-09-05T23:50:23.521591800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:23.522114 containerd[1476]: time="2025-09-05T23:50:23.521775400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:23.522114 containerd[1476]: time="2025-09-05T23:50:23.521792520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 23:50:23.522114 containerd[1476]: time="2025-09-05T23:50:23.521898120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 23:50:23.522114 containerd[1476]: time="2025-09-05T23:50:23.521948000Z" level=info msg="metadata content store policy set" policy=shared Sep 5 23:50:23.529317 containerd[1476]: time="2025-09-05T23:50:23.529256560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 23:50:23.529685 containerd[1476]: time="2025-09-05T23:50:23.529667480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 23:50:23.529810 containerd[1476]: time="2025-09-05T23:50:23.529791600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 23:50:23.529886 containerd[1476]: time="2025-09-05T23:50:23.529872200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.530232800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.530457520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.530825240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.530983960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.531003360Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.531017960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.531034760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.531048480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.531062320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.531080040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.531097400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.531111720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.531125480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 23:50:23.532311 containerd[1476]: time="2025-09-05T23:50:23.531140440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531163600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531178560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531192480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531207960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531227880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531242640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531256280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531269960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531284080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531317880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531331080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531344440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531357480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531385840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 23:50:23.532727 containerd[1476]: time="2025-09-05T23:50:23.531414960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.533010 containerd[1476]: time="2025-09-05T23:50:23.531431440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 5 23:50:23.533010 containerd[1476]: time="2025-09-05T23:50:23.531451560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 23:50:23.534091 containerd[1476]: time="2025-09-05T23:50:23.534046080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 23:50:23.534463 containerd[1476]: time="2025-09-05T23:50:23.534440160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 23:50:23.534585 containerd[1476]: time="2025-09-05T23:50:23.534567720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 23:50:23.534714 containerd[1476]: time="2025-09-05T23:50:23.534691440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 23:50:23.534970 containerd[1476]: time="2025-09-05T23:50:23.534955280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.535058 containerd[1476]: time="2025-09-05T23:50:23.535043960Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 23:50:23.535112 containerd[1476]: time="2025-09-05T23:50:23.535100440Z" level=info msg="NRI interface is disabled by configuration." Sep 5 23:50:23.535170 containerd[1476]: time="2025-09-05T23:50:23.535157400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 5 23:50:23.536816 containerd[1476]: time="2025-09-05T23:50:23.535672240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 23:50:23.536816 containerd[1476]: time="2025-09-05T23:50:23.535746320Z" level=info msg="Connect containerd service" Sep 5 23:50:23.536816 containerd[1476]: time="2025-09-05T23:50:23.535792480Z" level=info msg="using legacy CRI server" Sep 5 23:50:23.536816 containerd[1476]: time="2025-09-05T23:50:23.535800640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 23:50:23.536816 containerd[1476]: time="2025-09-05T23:50:23.535912120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 23:50:23.539062 containerd[1476]: time="2025-09-05T23:50:23.539018560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:50:23.539356 containerd[1476]: time="2025-09-05T23:50:23.539323240Z" level=info msg="Start subscribing containerd event" Sep 5 23:50:23.539558 containerd[1476]: time="2025-09-05T23:50:23.539511520Z" level=info msg="Start recovering state" Sep 5 23:50:23.539872 containerd[1476]: time="2025-09-05T23:50:23.539855640Z" level=info msg="Start event monitor" Sep 5 23:50:23.539991 containerd[1476]: time="2025-09-05T23:50:23.539975360Z" level=info msg="Start snapshots syncer" Sep 5 23:50:23.540054 containerd[1476]: time="2025-09-05T23:50:23.540042400Z" level=info msg="Start cni network conf syncer for default" Sep 5 23:50:23.540825 containerd[1476]: time="2025-09-05T23:50:23.540350400Z" level=info msg="Start streaming server" Sep 5 23:50:23.542435 containerd[1476]: time="2025-09-05T23:50:23.542384440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 23:50:23.542651 containerd[1476]: time="2025-09-05T23:50:23.542632880Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 23:50:23.546905 containerd[1476]: time="2025-09-05T23:50:23.546493560Z" level=info msg="containerd successfully booted in 0.099932s" Sep 5 23:50:23.546636 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 23:50:23.817028 tar[1463]: linux-arm64/README.md Sep 5 23:50:23.838745 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 23:50:23.877698 systemd-networkd[1376]: eth0: Gained IPv6LL Sep 5 23:50:23.879675 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Sep 5 23:50:23.884951 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 23:50:23.888100 systemd[1]: Reached target network-online.target - Network is Online. 
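
containerd reports its own startup time ("successfully booted in 0.099932s"); nearly the same figure falls out of the first and last timestamps it printed. A small parsing sketch, keeping microsecond precision since containerd prints nanoseconds (the two strings are taken from the log above):

    from datetime import datetime

    def parse(ts):
        # trim nanoseconds to microseconds before fromisoformat
        head, frac = ts.rstrip("Z").split(".")
        return datetime.fromisoformat(f"{head}.{frac[:6]}")

    start = parse("2025-09-05T23:50:23.449757640Z")
    serving = parse("2025-09-05T23:50:23.542632880Z")
    print((serving - start).total_seconds())  # ~0.093s, close to (not exactly)
                                              # the 0.099932s containerd measured
                                              # internally from an earlier point.
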
Sep 5 23:50:23.900704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:50:23.908118 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 23:50:23.966451 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 23:50:24.144218 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 23:50:24.166680 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 23:50:24.179856 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 23:50:24.187669 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 23:50:24.188103 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 23:50:24.200094 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 23:50:24.213650 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 23:50:24.222975 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 23:50:24.225995 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 23:50:24.227699 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 23:50:24.389801 systemd-networkd[1376]: eth1: Gained IPv6LL Sep 5 23:50:24.391980 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Sep 5 23:50:24.933903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:50:24.934016 (kubelet)[1577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:50:24.936896 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 23:50:24.940720 systemd[1]: Startup finished in 851ms (kernel) + 10.189s (initrd) + 5.094s (userspace) = 16.135s. Sep 5 23:50:25.596875 kubelet[1577]: E0905 23:50:25.596761 1577 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:50:25.602621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:50:25.602940 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:50:35.853532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 23:50:35.861055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:50:35.993607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:50:36.009070 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:50:36.064503 kubelet[1596]: E0905 23:50:36.063228 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:50:36.076291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:50:36.077153 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:50:46.327434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
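
kubelet keeps failing here because /var/lib/kubelet/config.yaml does not exist yet (it is written later by kubeadm or equivalent provisioning), and systemd keeps rescheduling the unit. The gap between each failure and the next scheduled restart is a steady ~10.25s, consistent with RestartSec=10s plus scheduling overhead; checking that directly from the log timestamps:

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    pairs = [  # (service failed, restart job scheduled), from the entries above
        ("23:50:25.602940", "23:50:35.853532"),
        ("23:50:36.077153", "23:50:46.327434"),
    ]
    for failed, rescheduled in pairs:
        dt = datetime.strptime(rescheduled, fmt) - datetime.strptime(failed, fmt)
        print(f"{dt.total_seconds():.2f}s between failure and restart")  # ~10.25s
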
Sep 5 23:50:46.336868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:50:46.451614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:50:46.456842 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:50:46.511938 kubelet[1611]: E0905 23:50:46.511861 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:50:46.514992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:50:46.515302 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:50:48.472808 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 23:50:48.481998 systemd[1]: Started sshd@0-91.98.45.119:22-139.178.68.195:56658.service - OpenSSH per-connection server daemon (139.178.68.195:56658). Sep 5 23:50:49.547099 sshd[1620]: Accepted publickey for core from 139.178.68.195 port 56658 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:49.549747 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:49.563392 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 23:50:49.576610 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 23:50:49.589501 systemd-logind[1458]: New session 1 of user core. Sep 5 23:50:49.600979 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 23:50:49.612130 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 23:50:49.625066 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:50:49.742198 systemd[1624]: Queued start job for default target default.target. Sep 5 23:50:49.753511 systemd[1624]: Created slice app.slice - User Application Slice. Sep 5 23:50:49.753599 systemd[1624]: Reached target paths.target - Paths. Sep 5 23:50:49.753627 systemd[1624]: Reached target timers.target - Timers. Sep 5 23:50:49.756000 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 23:50:49.771931 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 23:50:49.772205 systemd[1624]: Reached target sockets.target - Sockets. Sep 5 23:50:49.772243 systemd[1624]: Reached target basic.target - Basic System. Sep 5 23:50:49.772347 systemd[1624]: Reached target default.target - Main User Target. Sep 5 23:50:49.772411 systemd[1624]: Startup finished in 137ms. Sep 5 23:50:49.772418 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 23:50:49.779910 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 23:50:50.520164 systemd[1]: Started sshd@1-91.98.45.119:22-139.178.68.195:49722.service - OpenSSH per-connection server daemon (139.178.68.195:49722). Sep 5 23:50:51.513711 sshd[1635]: Accepted publickey for core from 139.178.68.195 port 49722 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:51.516898 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:51.525614 systemd-logind[1458]: New session 2 of user core. 
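
The "Accepted publickey ... SHA256:+hHH..." fingerprint above is the base64 of a SHA-256 digest over the raw public-key blob, with base64 padding stripped. A sketch that reproduces OpenSSH's computation for keys in authorized_keys format (the file path is illustrative):

    import base64, hashlib

    def ssh_fingerprint(pubkey_line):
        # authorized_keys format: "<type> <base64 blob> [comment]"
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as f:  # illustrative path
        for line in f:
            if line.strip() and not line.startswith("#"):
                print(ssh_fingerprint(line))
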
Sep 5 23:50:51.528945 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 23:50:52.207415 sshd[1635]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:52.211714 systemd[1]: sshd@1-91.98.45.119:22-139.178.68.195:49722.service: Deactivated successfully. Sep 5 23:50:52.213507 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 23:50:52.214992 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Sep 5 23:50:52.216392 systemd-logind[1458]: Removed session 2. Sep 5 23:50:52.383952 systemd[1]: Started sshd@2-91.98.45.119:22-139.178.68.195:49732.service - OpenSSH per-connection server daemon (139.178.68.195:49732). Sep 5 23:50:53.400706 sshd[1642]: Accepted publickey for core from 139.178.68.195 port 49732 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:53.403626 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:53.409083 systemd-logind[1458]: New session 3 of user core. Sep 5 23:50:53.424822 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 23:50:54.089376 sshd[1642]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:54.093158 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Sep 5 23:50:54.095105 systemd[1]: sshd@2-91.98.45.119:22-139.178.68.195:49732.service: Deactivated successfully. Sep 5 23:50:54.098389 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 23:50:54.099874 systemd-logind[1458]: Removed session 3. Sep 5 23:50:54.273021 systemd[1]: Started sshd@3-91.98.45.119:22-139.178.68.195:49742.service - OpenSSH per-connection server daemon (139.178.68.195:49742). Sep 5 23:50:54.676050 systemd-timesyncd[1348]: Contacted time server 185.255.121.15:123 (2.flatcar.pool.ntp.org). Sep 5 23:50:54.676173 systemd-timesyncd[1348]: Initial clock synchronization to Fri 2025-09-05 23:50:54.647258 UTC. Sep 5 23:50:55.268135 sshd[1649]: Accepted publickey for core from 139.178.68.195 port 49742 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:55.270209 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:55.275672 systemd-logind[1458]: New session 4 of user core. Sep 5 23:50:55.282919 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 23:50:55.959817 sshd[1649]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:55.963931 systemd[1]: sshd@3-91.98.45.119:22-139.178.68.195:49742.service: Deactivated successfully. Sep 5 23:50:55.965526 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 23:50:55.967718 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Sep 5 23:50:55.969173 systemd-logind[1458]: Removed session 4. Sep 5 23:50:56.136985 systemd[1]: Started sshd@4-91.98.45.119:22-139.178.68.195:49756.service - OpenSSH per-connection server daemon (139.178.68.195:49756). Sep 5 23:50:56.766003 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 5 23:50:56.776946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:50:56.907771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 23:50:56.925163 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:50:56.975966 kubelet[1665]: E0905 23:50:56.975892 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:50:56.979305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:50:56.979502 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:50:57.120738 sshd[1656]: Accepted publickey for core from 139.178.68.195 port 49756 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:57.122907 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:57.128646 systemd-logind[1458]: New session 5 of user core. Sep 5 23:50:57.135949 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 23:50:57.663679 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 23:50:57.664039 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:50:57.681437 sudo[1674]: pam_unix(sudo:session): session closed for user root Sep 5 23:50:57.843293 sshd[1656]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:57.848371 systemd[1]: sshd@4-91.98.45.119:22-139.178.68.195:49756.service: Deactivated successfully. Sep 5 23:50:57.850318 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 23:50:57.851093 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Sep 5 23:50:57.852488 systemd-logind[1458]: Removed session 5. Sep 5 23:50:58.020119 systemd[1]: Started sshd@5-91.98.45.119:22-139.178.68.195:49758.service - OpenSSH per-connection server daemon (139.178.68.195:49758). Sep 5 23:50:59.015501 sshd[1679]: Accepted publickey for core from 139.178.68.195 port 49758 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:59.018618 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:59.025377 systemd-logind[1458]: New session 6 of user core. Sep 5 23:50:59.032263 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 23:50:59.545465 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:50:59.545784 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:50:59.549794 sudo[1683]: pam_unix(sudo:session): session closed for user root Sep 5 23:50:59.555509 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:50:59.555961 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:50:59.573920 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 23:50:59.577015 auditctl[1686]: No rules Sep 5 23:50:59.577344 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:50:59.577520 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 23:50:59.580932 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Sep 5 23:50:59.620479 augenrules[1704]: No rules Sep 5 23:50:59.622648 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:50:59.624371 sudo[1682]: pam_unix(sudo:session): session closed for user root Sep 5 23:50:59.786219 sshd[1679]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:59.791969 systemd[1]: sshd@5-91.98.45.119:22-139.178.68.195:49758.service: Deactivated successfully. Sep 5 23:50:59.793942 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 23:50:59.796741 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Sep 5 23:50:59.798729 systemd-logind[1458]: Removed session 6. Sep 5 23:50:59.985060 systemd[1]: Started sshd@6-91.98.45.119:22-139.178.68.195:49772.service - OpenSSH per-connection server daemon (139.178.68.195:49772). Sep 5 23:51:01.032247 sshd[1712]: Accepted publickey for core from 139.178.68.195 port 49772 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:51:01.034610 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:51:01.040469 systemd-logind[1458]: New session 7 of user core. Sep 5 23:51:01.045600 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 23:51:01.589164 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:51:01.592046 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:51:01.933993 (dockerd)[1730]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 23:51:01.934956 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 23:51:02.190738 dockerd[1730]: time="2025-09-05T23:51:02.188740836Z" level=info msg="Starting up" Sep 5 23:51:02.311190 dockerd[1730]: time="2025-09-05T23:51:02.311135615Z" level=info msg="Loading containers: start." Sep 5 23:51:02.427743 kernel: Initializing XFRM netlink socket Sep 5 23:51:02.516801 systemd-networkd[1376]: docker0: Link UP Sep 5 23:51:02.541102 dockerd[1730]: time="2025-09-05T23:51:02.540954167Z" level=info msg="Loading containers: done." Sep 5 23:51:02.558764 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2976602348-merged.mount: Deactivated successfully. Sep 5 23:51:02.562039 dockerd[1730]: time="2025-09-05T23:51:02.561965555Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:51:02.562177 dockerd[1730]: time="2025-09-05T23:51:02.562114067Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 23:51:02.562297 dockerd[1730]: time="2025-09-05T23:51:02.562260141Z" level=info msg="Daemon has completed initialization" Sep 5 23:51:02.606937 dockerd[1730]: time="2025-09-05T23:51:02.606739368Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:51:02.607701 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 23:51:03.763031 containerd[1476]: time="2025-09-05T23:51:03.762985595Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 5 23:51:04.523464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188633225.mount: Deactivated successfully. 
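
Once dockerd logs "API listen on /run/docker.sock", the daemon answers plain HTTP over that unix socket, and its /_ping endpoint serves as the usual liveness check. A standard-library probe sketch (needs permission on the socket, i.e. root or the docker group):

    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")

    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
    s.close()

    print(reply.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
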
Sep 5 23:51:06.074827 containerd[1476]: time="2025-09-05T23:51:06.073663382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:06.078348 containerd[1476]: time="2025-09-05T23:51:06.078295441Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328449" Sep 5 23:51:06.080353 containerd[1476]: time="2025-09-05T23:51:06.080297206Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:06.086086 containerd[1476]: time="2025-09-05T23:51:06.086015393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:06.087953 containerd[1476]: time="2025-09-05T23:51:06.087898782Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.324863558s" Sep 5 23:51:06.087953 containerd[1476]: time="2025-09-05T23:51:06.087948499Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 5 23:51:06.088807 containerd[1476]: time="2025-09-05T23:51:06.088751755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 5 23:51:07.049097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 5 23:51:07.055941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:07.185282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:07.190785 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:51:07.237427 kubelet[1929]: E0905 23:51:07.237313 1929 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:51:07.239774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:51:07.239921 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
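
The kube-apiserver pull above reports both the bytes fetched ("bytes read=26328449") and the elapsed time ("in 2.324863558s"), so effective pull throughput falls out directly; the registered image size (26325157) is nearly identical here and gives the same order of magnitude:

    elapsed = 2.324863558   # from the "Pulled image ..." line above
    transferred = 26328449  # "bytes read" while pulling

    print(f"{transferred / elapsed / 2**20:.1f} MiB/s")  # ~10.8 MiB/s
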
Sep 5 23:51:07.799766 containerd[1476]: time="2025-09-05T23:51:07.799696612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:07.802936 containerd[1476]: time="2025-09-05T23:51:07.801641054Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528572" Sep 5 23:51:07.802936 containerd[1476]: time="2025-09-05T23:51:07.802847662Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:07.808572 containerd[1476]: time="2025-09-05T23:51:07.807771136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:07.809760 containerd[1476]: time="2025-09-05T23:51:07.809036376Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.720239141s" Sep 5 23:51:07.809760 containerd[1476]: time="2025-09-05T23:51:07.809080859Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 5 23:51:07.810332 containerd[1476]: time="2025-09-05T23:51:07.810303335Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 5 23:51:08.807874 update_engine[1459]: I20250905 23:51:08.806643 1459 update_attempter.cc:509] Updating boot flags... 
Sep 5 23:51:08.868697 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1949) Sep 5 23:51:09.233061 containerd[1476]: time="2025-09-05T23:51:09.232713373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:09.234348 containerd[1476]: time="2025-09-05T23:51:09.234191425Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483547" Sep 5 23:51:09.236593 containerd[1476]: time="2025-09-05T23:51:09.236491604Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:09.240091 containerd[1476]: time="2025-09-05T23:51:09.240021015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:09.243609 containerd[1476]: time="2025-09-05T23:51:09.241796013Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.431377012s" Sep 5 23:51:09.243609 containerd[1476]: time="2025-09-05T23:51:09.241850893Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 5 23:51:09.243815 containerd[1476]: time="2025-09-05T23:51:09.243638082Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 5 23:51:10.232956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount908443394.mount: Deactivated successfully. 
Sep 5 23:51:10.833210 containerd[1476]: time="2025-09-05T23:51:10.832796585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.835056 containerd[1476]: time="2025-09-05T23:51:10.834502829Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376750" Sep 5 23:51:10.838409 containerd[1476]: time="2025-09-05T23:51:10.838339831Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.843120 containerd[1476]: time="2025-09-05T23:51:10.842710552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.843828 containerd[1476]: time="2025-09-05T23:51:10.843783745Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.60010709s" Sep 5 23:51:10.843915 containerd[1476]: time="2025-09-05T23:51:10.843830194Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 5 23:51:10.844495 containerd[1476]: time="2025-09-05T23:51:10.844460407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 23:51:11.458772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount418801019.mount: Deactivated successfully. 
Sep 5 23:51:12.193683 containerd[1476]: time="2025-09-05T23:51:12.192753626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.196188 containerd[1476]: time="2025-09-05T23:51:12.195583222Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Sep 5 23:51:12.197857 containerd[1476]: time="2025-09-05T23:51:12.197253108Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.201753 containerd[1476]: time="2025-09-05T23:51:12.201701901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.203276 containerd[1476]: time="2025-09-05T23:51:12.203227274Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.357911564s" Sep 5 23:51:12.203437 containerd[1476]: time="2025-09-05T23:51:12.203395733Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 23:51:12.204031 containerd[1476]: time="2025-09-05T23:51:12.204008888Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 23:51:12.717621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891888367.mount: Deactivated successfully. 
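Every "Pulled image" record reports the same three identifiers plus a size. A sketch of reading those fields back out of the content store after the fact; the image reference is taken from the coredns pull above, everything else follows the first sketch's assumptions:

```go
// Read back the identifiers the "Pulled image" records report:
// the repo digest (manifest digest) and the stored content size.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.GetImage(ctx, "registry.k8s.io/coredns/coredns:v1.11.3")
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx) // cf. size "16948420" in the record above
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("digest=%s size=%d", img.Target().Digest, size)
}
```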
Sep 5 23:51:12.724741 containerd[1476]: time="2025-09-05T23:51:12.724678708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.728498 containerd[1476]: time="2025-09-05T23:51:12.728395657Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 5 23:51:12.730415 containerd[1476]: time="2025-09-05T23:51:12.730332184Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.733532 containerd[1476]: time="2025-09-05T23:51:12.733103216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.734169 containerd[1476]: time="2025-09-05T23:51:12.734129605Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 529.999988ms" Sep 5 23:51:12.734235 containerd[1476]: time="2025-09-05T23:51:12.734170221Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 23:51:12.734657 containerd[1476]: time="2025-09-05T23:51:12.734628148Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 5 23:51:13.391230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067357978.mount: Deactivated successfully. Sep 5 23:51:15.738682 containerd[1476]: time="2025-09-05T23:51:15.738624971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:15.740201 containerd[1476]: time="2025-09-05T23:51:15.740109723Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Sep 5 23:51:15.741771 containerd[1476]: time="2025-09-05T23:51:15.741299779Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:15.747700 containerd[1476]: time="2025-09-05T23:51:15.747642670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:15.750464 containerd[1476]: time="2025-09-05T23:51:15.750390203Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.015723076s" Sep 5 23:51:15.750464 containerd[1476]: time="2025-09-05T23:51:15.750458369Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 5 23:51:17.299103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
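"Scheduled restart job, restart counter is at 5" is systemd's Restart= policy relaunching a unit that keeps exiting; the cause of the exits follows just below. systemd exposes that counter as the NRestarts service property. A sketch of querying it over D-Bus, assuming the github.com/coreos/go-systemd/v22 bindings (method names per those bindings as I recall them; treat this as a sketch, not verified against a specific release):

```go
// Sketch: read systemd's restart counter for kubelet.service over D-Bus.
// Assumes go-systemd v22 and permission to talk to the system bus.
package main

import (
	"context"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// NRestarts is the counter behind "restart counter is at 5".
	prop, err := conn.GetServicePropertyContext(ctx, "kubelet.service", "NRestarts")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("kubelet.service NRestarts=%v", prop.Value.Value())
}
```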
Sep 5 23:51:17.309758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:17.433744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:17.445495 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:51:17.499515 kubelet[2106]: E0905 23:51:17.499387 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:51:17.503159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:51:17.503578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:51:20.812743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:20.820161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:20.856151 systemd[1]: Reloading requested from client PID 2120 ('systemctl') (unit session-7.scope)... Sep 5 23:51:20.856171 systemd[1]: Reloading... Sep 5 23:51:20.968670 zram_generator::config[2160]: No configuration found. Sep 5 23:51:21.072013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:51:21.141456 systemd[1]: Reloading finished in 284 ms. Sep 5 23:51:21.210208 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:21.212184 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:51:21.212420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:21.220829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:21.365867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:21.366021 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:51:21.417886 kubelet[2210]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:51:21.417886 kubelet[2210]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 23:51:21.417886 kubelet[2210]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
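The crash loop above has a single cause: kubelet.service is started before anything has written /var/lib/kubelet/config.yaml (kubeadm init/join produces it later), so the kubelet exits status=1 and systemd restarts it until the file exists. The precondition it is tripping over amounts to this — an illustrative stdlib probe, not kubelet code:

```go
// Illustrative only: the existence check that fails in the log above.
package main

import (
	"errors"
	"io/fs"
	"log"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		// kubelet exits here with status=1/FAILURE; systemd's Restart=
		// policy then schedules the retries logged above.
		log.Fatalf("failed to load Kubelet config file %q: %v", path, err)
	}
	log.Println("kubelet config present; startup can proceed")
}
```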
Sep 5 23:51:21.417886 kubelet[2210]: I0905 23:51:21.417501 2210 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:51:22.157241 kubelet[2210]: I0905 23:51:22.157181 2210 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 23:51:22.157241 kubelet[2210]: I0905 23:51:22.157228 2210 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:51:22.157718 kubelet[2210]: I0905 23:51:22.157684 2210 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 23:51:22.187732 kubelet[2210]: E0905 23:51:22.187683 2210 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.98.45.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.98.45.119:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:22.189419 kubelet[2210]: I0905 23:51:22.189229 2210 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:51:22.198052 kubelet[2210]: E0905 23:51:22.198009 2210 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:51:22.198052 kubelet[2210]: I0905 23:51:22.198046 2210 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:51:22.200731 kubelet[2210]: I0905 23:51:22.200707 2210 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:51:22.201704 kubelet[2210]: I0905 23:51:22.201638 2210 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:51:22.201910 kubelet[2210]: I0905 23:51:22.201707 2210 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-2b989ca6ad","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:51:22.202008 kubelet[2210]: I0905 23:51:22.201971 2210 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:51:22.202008 kubelet[2210]: I0905 23:51:22.201981 2210 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 23:51:22.202211 kubelet[2210]: I0905 23:51:22.202182 2210 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:22.206970 kubelet[2210]: I0905 23:51:22.206746 2210 kubelet.go:446] "Attempting to sync node with API server" Sep 5 23:51:22.206970 kubelet[2210]: I0905 23:51:22.206780 2210 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:51:22.206970 kubelet[2210]: I0905 23:51:22.206805 2210 kubelet.go:352] "Adding apiserver pod source" Sep 5 23:51:22.206970 kubelet[2210]: I0905 23:51:22.206817 2210 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:51:22.214481 kubelet[2210]: W0905 23:51:22.213363 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.98.45.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-2b989ca6ad&limit=500&resourceVersion=0": dial tcp 91.98.45.119:6443: connect: connection refused Sep 5 23:51:22.214481 kubelet[2210]: E0905 23:51:22.213618 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.98.45.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-2b989ca6ad&limit=500&resourceVersion=0\": dial tcp 91.98.45.119:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:22.214481 kubelet[2210]: 
W0905 23:51:22.214022 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.98.45.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.98.45.119:6443: connect: connection refused Sep 5 23:51:22.214481 kubelet[2210]: E0905 23:51:22.214062 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.98.45.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.98.45.119:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:22.214883 kubelet[2210]: I0905 23:51:22.214866 2210 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:51:22.215743 kubelet[2210]: I0905 23:51:22.215723 2210 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:51:22.215942 kubelet[2210]: W0905 23:51:22.215929 2210 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 23:51:22.218245 kubelet[2210]: I0905 23:51:22.218218 2210 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 23:51:22.218518 kubelet[2210]: I0905 23:51:22.218506 2210 server.go:1287] "Started kubelet" Sep 5 23:51:22.223892 kubelet[2210]: I0905 23:51:22.223735 2210 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:51:22.225340 kubelet[2210]: E0905 23:51:22.224610 2210 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.98.45.119:6443/api/v1/namespaces/default/events\": dial tcp 91.98.45.119:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-2b989ca6ad.186287f2f6af6db2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-2b989ca6ad,UID:ci-4081-3-5-n-2b989ca6ad,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-2b989ca6ad,},FirstTimestamp:2025-09-05 23:51:22.218442162 +0000 UTC m=+0.846174853,LastTimestamp:2025-09-05 23:51:22.218442162 +0000 UTC m=+0.846174853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-2b989ca6ad,}" Sep 5 23:51:22.226442 kubelet[2210]: I0905 23:51:22.226376 2210 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:51:22.227298 kubelet[2210]: I0905 23:51:22.227252 2210 server.go:479] "Adding debug handlers to kubelet server" Sep 5 23:51:22.229607 kubelet[2210]: I0905 23:51:22.228964 2210 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:51:22.229607 kubelet[2210]: I0905 23:51:22.229084 2210 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 23:51:22.229607 kubelet[2210]: I0905 23:51:22.229258 2210 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:51:22.229607 kubelet[2210]: E0905 23:51:22.229388 2210 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" Sep 5 23:51:22.229607 kubelet[2210]: I0905 23:51:22.229502 2210 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:51:22.231205 kubelet[2210]: I0905 23:51:22.231180 2210 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 23:51:22.232009 kubelet[2210]: I0905 23:51:22.231990 2210 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:51:22.233384 kubelet[2210]: W0905 23:51:22.233343 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.98.45.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.98.45.119:6443: connect: connection refused Sep 5 23:51:22.233732 kubelet[2210]: E0905 23:51:22.233583 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.98.45.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.98.45.119:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:22.233732 kubelet[2210]: E0905 23:51:22.233677 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.98.45.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-2b989ca6ad?timeout=10s\": dial tcp 91.98.45.119:6443: connect: connection refused" interval="200ms" Sep 5 23:51:22.234034 kubelet[2210]: I0905 23:51:22.234015 2210 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:51:22.234177 kubelet[2210]: I0905 23:51:22.234159 2210 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:51:22.238737 kubelet[2210]: I0905 23:51:22.238702 2210 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:51:22.255208 kubelet[2210]: I0905 23:51:22.254976 2210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:51:22.256390 kubelet[2210]: I0905 23:51:22.256363 2210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 23:51:22.256515 kubelet[2210]: I0905 23:51:22.256503 2210 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 23:51:22.256642 kubelet[2210]: I0905 23:51:22.256628 2210 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 23:51:22.256767 kubelet[2210]: I0905 23:51:22.256754 2210 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 23:51:22.256898 kubelet[2210]: E0905 23:51:22.256872 2210 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:51:22.266972 kubelet[2210]: E0905 23:51:22.265977 2210 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:51:22.267870 kubelet[2210]: W0905 23:51:22.267489 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.98.45.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.98.45.119:6443: connect: connection refused Sep 5 23:51:22.267870 kubelet[2210]: E0905 23:51:22.267614 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.98.45.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.98.45.119:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:22.271681 kubelet[2210]: I0905 23:51:22.271652 2210 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 23:51:22.271826 kubelet[2210]: I0905 23:51:22.271815 2210 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 23:51:22.271971 kubelet[2210]: I0905 23:51:22.271962 2210 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:22.274847 kubelet[2210]: I0905 23:51:22.274811 2210 policy_none.go:49] "None policy: Start" Sep 5 23:51:22.275044 kubelet[2210]: I0905 23:51:22.275029 2210 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 23:51:22.275138 kubelet[2210]: I0905 23:51:22.275126 2210 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:51:22.282946 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 23:51:22.296744 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 23:51:22.311116 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 23:51:22.315189 kubelet[2210]: I0905 23:51:22.313145 2210 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:51:22.315189 kubelet[2210]: I0905 23:51:22.313384 2210 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:51:22.315189 kubelet[2210]: I0905 23:51:22.313409 2210 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:51:22.315189 kubelet[2210]: I0905 23:51:22.313859 2210 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:51:22.315726 kubelet[2210]: E0905 23:51:22.315707 2210 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 23:51:22.315837 kubelet[2210]: E0905 23:51:22.315826 2210 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-2b989ca6ad\" not found" Sep 5 23:51:22.370229 systemd[1]: Created slice kubepods-burstable-podceb9ea542bfb62c399dd48be82646d46.slice - libcontainer container kubepods-burstable-podceb9ea542bfb62c399dd48be82646d46.slice. Sep 5 23:51:22.400912 kubelet[2210]: E0905 23:51:22.400844 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.405729 systemd[1]: Created slice kubepods-burstable-pod698eae738baa53f36aa4895190899c6b.slice - libcontainer container kubepods-burstable-pod698eae738baa53f36aa4895190899c6b.slice. 
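The kubepods-burstable-pod&lt;hash&gt;.slice units created above come from the kubelet's systemd cgroup driver ("CgroupDriver":"systemd" in the node config rendered earlier): each QoS class gets a slice under kubepods.slice, and each pod a slice embedding its UID with dashes mapped to underscores, since systemd unit names cannot contain them. A hedged sketch of that naming rule — my reconstruction of the convention, not kubelet source:

```go
// Reconstructed naming rule for pod slices under the systemd cgroup driver.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	// Static-pod UIDs (manifest hashes, as in the log above) contain no
	// dashes; API-pod UIDs do, hence the dash-to-underscore mapping.
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "ceb9ea542bfb62c399dd48be82646d46"))
	// kubepods-burstable-podceb9ea542bfb62c399dd48be82646d46.slice
}
```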
Sep 5 23:51:22.419836 kubelet[2210]: I0905 23:51:22.417871 2210 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.419836 kubelet[2210]: E0905 23:51:22.418246 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.419836 kubelet[2210]: E0905 23:51:22.418634 2210 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.98.45.119:6443/api/v1/nodes\": dial tcp 91.98.45.119:6443: connect: connection refused" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.424929 systemd[1]: Created slice kubepods-burstable-pod4e0a300e3fe01a3c5a9cbcf5c8f61581.slice - libcontainer container kubepods-burstable-pod4e0a300e3fe01a3c5a9cbcf5c8f61581.slice. Sep 5 23:51:22.427057 kubelet[2210]: E0905 23:51:22.427027 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.433558 kubelet[2210]: I0905 23:51:22.433184 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/698eae738baa53f36aa4895190899c6b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" (UID: \"698eae738baa53f36aa4895190899c6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.433558 kubelet[2210]: I0905 23:51:22.433242 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/698eae738baa53f36aa4895190899c6b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" (UID: \"698eae738baa53f36aa4895190899c6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.433558 kubelet[2210]: I0905 23:51:22.433283 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ceb9ea542bfb62c399dd48be82646d46-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-2b989ca6ad\" (UID: \"ceb9ea542bfb62c399dd48be82646d46\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.433558 kubelet[2210]: I0905 23:51:22.433307 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ceb9ea542bfb62c399dd48be82646d46-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-2b989ca6ad\" (UID: \"ceb9ea542bfb62c399dd48be82646d46\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.433558 kubelet[2210]: I0905 23:51:22.433332 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/698eae738baa53f36aa4895190899c6b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" (UID: \"698eae738baa53f36aa4895190899c6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.433848 kubelet[2210]: I0905 23:51:22.433374 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/698eae738baa53f36aa4895190899c6b-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" (UID: \"698eae738baa53f36aa4895190899c6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.433848 kubelet[2210]: I0905 23:51:22.433405 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/698eae738baa53f36aa4895190899c6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" (UID: \"698eae738baa53f36aa4895190899c6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.433848 kubelet[2210]: I0905 23:51:22.433427 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e0a300e3fe01a3c5a9cbcf5c8f61581-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-2b989ca6ad\" (UID: \"4e0a300e3fe01a3c5a9cbcf5c8f61581\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.433848 kubelet[2210]: I0905 23:51:22.433449 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ceb9ea542bfb62c399dd48be82646d46-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-2b989ca6ad\" (UID: \"ceb9ea542bfb62c399dd48be82646d46\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.434253 kubelet[2210]: E0905 23:51:22.434214 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.98.45.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-2b989ca6ad?timeout=10s\": dial tcp 91.98.45.119:6443: connect: connection refused" interval="400ms" Sep 5 23:51:22.621661 kubelet[2210]: I0905 23:51:22.621588 2210 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.622324 kubelet[2210]: E0905 23:51:22.622194 2210 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.98.45.119:6443/api/v1/nodes\": dial tcp 91.98.45.119:6443: connect: connection refused" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:22.703600 containerd[1476]: time="2025-09-05T23:51:22.702954920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-2b989ca6ad,Uid:ceb9ea542bfb62c399dd48be82646d46,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:22.721505 containerd[1476]: time="2025-09-05T23:51:22.721365896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-2b989ca6ad,Uid:698eae738baa53f36aa4895190899c6b,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:22.733200 containerd[1476]: time="2025-09-05T23:51:22.732779535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-2b989ca6ad,Uid:4e0a300e3fe01a3c5a9cbcf5c8f61581,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:22.834995 kubelet[2210]: E0905 23:51:22.834952 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.98.45.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-2b989ca6ad?timeout=10s\": dial tcp 91.98.45.119:6443: connect: connection refused" interval="800ms" Sep 5 23:51:23.025136 kubelet[2210]: I0905 23:51:23.024868 2210 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:23.026648 kubelet[2210]: E0905 23:51:23.026586 2210 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.98.45.119:6443/api/v1/nodes\": dial tcp 91.98.45.119:6443: connect: connection refused" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:23.087532 kubelet[2210]: W0905 23:51:23.087434 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.98.45.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.98.45.119:6443: connect: connection refused Sep 5 23:51:23.087532 kubelet[2210]: E0905 23:51:23.087532 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.98.45.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.98.45.119:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:23.214983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2742300519.mount: Deactivated successfully. Sep 5 23:51:23.225591 containerd[1476]: time="2025-09-05T23:51:23.224693115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:23.225782 containerd[1476]: time="2025-09-05T23:51:23.225748167Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:23.226785 containerd[1476]: time="2025-09-05T23:51:23.226755072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:51:23.227629 containerd[1476]: time="2025-09-05T23:51:23.227596746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:51:23.228579 containerd[1476]: time="2025-09-05T23:51:23.228397752Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:23.229826 containerd[1476]: time="2025-09-05T23:51:23.229793343Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:23.230260 containerd[1476]: time="2025-09-05T23:51:23.230204543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 5 23:51:23.234654 containerd[1476]: time="2025-09-05T23:51:23.234591340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:23.236712 containerd[1476]: time="2025-09-05T23:51:23.236418366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.212679ms" Sep 5 23:51:23.241200 containerd[1476]: time="2025-09-05T23:51:23.241155260Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.054786ms" Sep 5 23:51:23.247478 containerd[1476]: time="2025-09-05T23:51:23.247137191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.277768ms" Sep 5 23:51:23.283728 kubelet[2210]: W0905 23:51:23.283603 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.98.45.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-2b989ca6ad&limit=500&resourceVersion=0": dial tcp 91.98.45.119:6443: connect: connection refused Sep 5 23:51:23.283728 kubelet[2210]: E0905 23:51:23.283665 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.98.45.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-2b989ca6ad&limit=500&resourceVersion=0\": dial tcp 91.98.45.119:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:23.382909 containerd[1476]: time="2025-09-05T23:51:23.382591813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:23.382909 containerd[1476]: time="2025-09-05T23:51:23.382721935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:23.382909 containerd[1476]: time="2025-09-05T23:51:23.382757604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:23.383348 containerd[1476]: time="2025-09-05T23:51:23.383272653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:23.390743 containerd[1476]: time="2025-09-05T23:51:23.390626903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:23.390743 containerd[1476]: time="2025-09-05T23:51:23.390689444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:23.390743 containerd[1476]: time="2025-09-05T23:51:23.390700641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:23.391024 containerd[1476]: time="2025-09-05T23:51:23.390784456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:23.393611 containerd[1476]: time="2025-09-05T23:51:23.393480588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:23.393736 containerd[1476]: time="2025-09-05T23:51:23.393593955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:23.393922 containerd[1476]: time="2025-09-05T23:51:23.393808732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:23.394104 containerd[1476]: time="2025-09-05T23:51:23.393997637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:23.424746 systemd[1]: Started cri-containerd-341070dedd30ed22b3f3c8ef6dea27156a9cd91b69e5a4066cca4c3b57d335ca.scope - libcontainer container 341070dedd30ed22b3f3c8ef6dea27156a9cd91b69e5a4066cca4c3b57d335ca. Sep 5 23:51:23.425986 systemd[1]: Started cri-containerd-9203245b0f92e644e7145981182c4ea413f0066f586edda0c598e585b05bcc32.scope - libcontainer container 9203245b0f92e644e7145981182c4ea413f0066f586edda0c598e585b05bcc32. Sep 5 23:51:23.428880 systemd[1]: Started cri-containerd-c7e74a793718a112ea0a9d276d0e7c008e9891611a05f70d4c899bbc76526618.scope - libcontainer container c7e74a793718a112ea0a9d276d0e7c008e9891611a05f70d4c899bbc76526618. Sep 5 23:51:23.471611 kubelet[2210]: W0905 23:51:23.471389 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.98.45.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.98.45.119:6443: connect: connection refused Sep 5 23:51:23.471611 kubelet[2210]: E0905 23:51:23.471469 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.98.45.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.98.45.119:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:23.479556 containerd[1476]: time="2025-09-05T23:51:23.479008532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-2b989ca6ad,Uid:4e0a300e3fe01a3c5a9cbcf5c8f61581,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7e74a793718a112ea0a9d276d0e7c008e9891611a05f70d4c899bbc76526618\"" Sep 5 23:51:23.487371 containerd[1476]: time="2025-09-05T23:51:23.487314863Z" level=info msg="CreateContainer within sandbox \"c7e74a793718a112ea0a9d276d0e7c008e9891611a05f70d4c899bbc76526618\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 23:51:23.492099 kubelet[2210]: W0905 23:51:23.492007 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.98.45.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.98.45.119:6443: connect: connection refused Sep 5 23:51:23.492099 kubelet[2210]: E0905 23:51:23.492070 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.98.45.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.98.45.119:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:23.496768 containerd[1476]: time="2025-09-05T23:51:23.496339383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-2b989ca6ad,Uid:ceb9ea542bfb62c399dd48be82646d46,Namespace:kube-system,Attempt:0,} returns sandbox id \"9203245b0f92e644e7145981182c4ea413f0066f586edda0c598e585b05bcc32\"" Sep 5 23:51:23.502674 containerd[1476]: time="2025-09-05T23:51:23.501706734Z" level=info 
msg="CreateContainer within sandbox \"9203245b0f92e644e7145981182c4ea413f0066f586edda0c598e585b05bcc32\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 23:51:23.503704 containerd[1476]: time="2025-09-05T23:51:23.503654284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-2b989ca6ad,Uid:698eae738baa53f36aa4895190899c6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"341070dedd30ed22b3f3c8ef6dea27156a9cd91b69e5a4066cca4c3b57d335ca\"" Sep 5 23:51:23.508034 containerd[1476]: time="2025-09-05T23:51:23.507983738Z" level=info msg="CreateContainer within sandbox \"341070dedd30ed22b3f3c8ef6dea27156a9cd91b69e5a4066cca4c3b57d335ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 23:51:23.512195 containerd[1476]: time="2025-09-05T23:51:23.512117569Z" level=info msg="CreateContainer within sandbox \"c7e74a793718a112ea0a9d276d0e7c008e9891611a05f70d4c899bbc76526618\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224\"" Sep 5 23:51:23.513774 containerd[1476]: time="2025-09-05T23:51:23.513712462Z" level=info msg="StartContainer for \"4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224\"" Sep 5 23:51:23.524362 containerd[1476]: time="2025-09-05T23:51:23.524314841Z" level=info msg="CreateContainer within sandbox \"9203245b0f92e644e7145981182c4ea413f0066f586edda0c598e585b05bcc32\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4172ce34d4d2f18662c57c76fb4b2b51d85a9309a40bf09dfbd082999796be0c\"" Sep 5 23:51:23.525361 containerd[1476]: time="2025-09-05T23:51:23.525270882Z" level=info msg="StartContainer for \"4172ce34d4d2f18662c57c76fb4b2b51d85a9309a40bf09dfbd082999796be0c\"" Sep 5 23:51:23.532301 containerd[1476]: time="2025-09-05T23:51:23.532230926Z" level=info msg="CreateContainer within sandbox \"341070dedd30ed22b3f3c8ef6dea27156a9cd91b69e5a4066cca4c3b57d335ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275\"" Sep 5 23:51:23.532911 containerd[1476]: time="2025-09-05T23:51:23.532775886Z" level=info msg="StartContainer for \"a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275\"" Sep 5 23:51:23.543771 systemd[1]: Started cri-containerd-4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224.scope - libcontainer container 4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224. Sep 5 23:51:23.565791 systemd[1]: Started cri-containerd-4172ce34d4d2f18662c57c76fb4b2b51d85a9309a40bf09dfbd082999796be0c.scope - libcontainer container 4172ce34d4d2f18662c57c76fb4b2b51d85a9309a40bf09dfbd082999796be0c. Sep 5 23:51:23.584650 systemd[1]: Started cri-containerd-a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275.scope - libcontainer container a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275. 
Sep 5 23:51:23.613912 containerd[1476]: time="2025-09-05T23:51:23.612828792Z" level=info msg="StartContainer for \"4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224\" returns successfully" Sep 5 23:51:23.635725 kubelet[2210]: E0905 23:51:23.635684 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.98.45.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-2b989ca6ad?timeout=10s\": dial tcp 91.98.45.119:6443: connect: connection refused" interval="1.6s" Sep 5 23:51:23.643337 containerd[1476]: time="2025-09-05T23:51:23.642899277Z" level=info msg="StartContainer for \"4172ce34d4d2f18662c57c76fb4b2b51d85a9309a40bf09dfbd082999796be0c\" returns successfully" Sep 5 23:51:23.671323 containerd[1476]: time="2025-09-05T23:51:23.671280536Z" level=info msg="StartContainer for \"a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275\" returns successfully" Sep 5 23:51:23.829239 kubelet[2210]: I0905 23:51:23.829204 2210 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:24.280246 kubelet[2210]: E0905 23:51:24.279647 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:24.284343 kubelet[2210]: E0905 23:51:24.283885 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:24.288378 kubelet[2210]: E0905 23:51:24.288142 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:25.287968 kubelet[2210]: E0905 23:51:25.287936 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:25.288970 kubelet[2210]: E0905 23:51:25.288950 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:26.214859 kubelet[2210]: I0905 23:51:26.214811 2210 apiserver.go:52] "Watching apiserver" Sep 5 23:51:26.291356 kubelet[2210]: E0905 23:51:26.291287 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:26.296862 kubelet[2210]: E0905 23:51:26.296817 2210 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-2b989ca6ad\" not found" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:26.332803 kubelet[2210]: I0905 23:51:26.332750 2210 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:51:26.342876 kubelet[2210]: E0905 23:51:26.342728 2210 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-5-n-2b989ca6ad.186287f2f6af6db2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-2b989ca6ad,UID:ci-4081-3-5-n-2b989ca6ad,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-2b989ca6ad,},FirstTimestamp:2025-09-05 23:51:22.218442162 +0000 UTC m=+0.846174853,LastTimestamp:2025-09-05 23:51:22.218442162 +0000 UTC m=+0.846174853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-2b989ca6ad,}" Sep 5 23:51:26.393179 kubelet[2210]: I0905 23:51:26.392737 2210 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:26.397930 kubelet[2210]: E0905 23:51:26.397838 2210 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-5-n-2b989ca6ad.186287f2f9847929 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-2b989ca6ad,UID:ci-4081-3-5-n-2b989ca6ad,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-2b989ca6ad,},FirstTimestamp:2025-09-05 23:51:22.265958697 +0000 UTC m=+0.893691349,LastTimestamp:2025-09-05 23:51:22.265958697 +0000 UTC m=+0.893691349,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-2b989ca6ad,}" Sep 5 23:51:26.431851 kubelet[2210]: I0905 23:51:26.431801 2210 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:26.441661 kubelet[2210]: E0905 23:51:26.441620 2210 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-2b989ca6ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:26.441661 kubelet[2210]: I0905 23:51:26.441658 2210 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:26.443776 kubelet[2210]: E0905 23:51:26.443690 2210 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:26.443776 kubelet[2210]: I0905 23:51:26.443752 2210 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:26.445915 kubelet[2210]: E0905 23:51:26.445764 2210 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-2b989ca6ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:28.340155 systemd[1]: Reloading requested from client PID 2481 ('systemctl') (unit session-7.scope)... Sep 5 23:51:28.340177 systemd[1]: Reloading... Sep 5 23:51:28.494575 zram_generator::config[2524]: No configuration found. Sep 5 23:51:28.604654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:51:28.695410 systemd[1]: Reloading finished in 354 ms. Sep 5 23:51:28.741448 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
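The "no PriorityClass with name system-node-critical was found" rejections above are transient: the apiserver bootstraps the built-in system-node-critical and system-cluster-critical classes itself shortly after it starts serving, after which mirror-pod creation succeeds. To make the object concrete, a client-go sketch of its shape — creating it by hand is shown only for illustration, and the admin.conf path is the usual kubeadm convention, assumed here:

```go
// Shape of the PriorityClass the mirror pods above require
// (normally auto-created by the apiserver, not an administrator).
package main

import (
	"context"
	"log"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000, // the built-in value for this class
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	_, err = clientset.SchedulingV1().PriorityClasses().Create(
		context.Background(), pc, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
}
```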
Sep 5 23:51:28.756042 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:51:28.757000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:28.757436 systemd[1]: kubelet.service: Consumed 1.296s CPU time, 130.6M memory peak, 0B memory swap peak. Sep 5 23:51:28.767101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:28.898804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:28.900999 (kubelet)[2566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:51:28.966575 kubelet[2566]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:51:28.966575 kubelet[2566]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 23:51:28.966575 kubelet[2566]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:51:28.966575 kubelet[2566]: I0905 23:51:28.964803 2566 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:51:28.975343 kubelet[2566]: I0905 23:51:28.975289 2566 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 23:51:28.975343 kubelet[2566]: I0905 23:51:28.975324 2566 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:51:28.975704 kubelet[2566]: I0905 23:51:28.975687 2566 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 23:51:28.977825 kubelet[2566]: I0905 23:51:28.977793 2566 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 23:51:28.982240 kubelet[2566]: I0905 23:51:28.981822 2566 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:51:28.990429 kubelet[2566]: E0905 23:51:28.990366 2566 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:51:28.990849 kubelet[2566]: I0905 23:51:28.990805 2566 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:51:28.993910 kubelet[2566]: I0905 23:51:28.993866 2566 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:51:28.994350 kubelet[2566]: I0905 23:51:28.994299 2566 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:51:28.994760 kubelet[2566]: I0905 23:51:28.994360 2566 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-2b989ca6ad","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:51:28.994867 kubelet[2566]: I0905 23:51:28.994771 2566 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:51:28.994867 kubelet[2566]: I0905 23:51:28.994795 2566 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 23:51:28.994918 kubelet[2566]: I0905 23:51:28.994880 2566 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:28.995169 kubelet[2566]: I0905 23:51:28.995150 2566 kubelet.go:446] "Attempting to sync node with API server" Sep 5 23:51:28.995209 kubelet[2566]: I0905 23:51:28.995188 2566 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:51:28.995354 kubelet[2566]: I0905 23:51:28.995234 2566 kubelet.go:352] "Adding apiserver pod source" Sep 5 23:51:28.995354 kubelet[2566]: I0905 23:51:28.995267 2566 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:51:29.000176 kubelet[2566]: I0905 23:51:28.999994 2566 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:51:29.001908 kubelet[2566]: I0905 23:51:29.001059 2566 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:51:29.004639 kubelet[2566]: I0905 23:51:29.003743 2566 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 23:51:29.004639 kubelet[2566]: I0905 23:51:29.003792 2566 server.go:1287] "Started kubelet" Sep 5 23:51:29.008552 kubelet[2566]: I0905 23:51:29.007281 2566 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:51:29.012650 kubelet[2566]: E0905 23:51:29.012605 2566 kubelet.go:1555] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:51:29.014434 kubelet[2566]: I0905 23:51:29.014390 2566 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:51:29.017084 kubelet[2566]: I0905 23:51:29.016699 2566 server.go:479] "Adding debug handlers to kubelet server" Sep 5 23:51:29.019911 kubelet[2566]: I0905 23:51:29.019412 2566 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 23:51:29.020281 kubelet[2566]: I0905 23:51:29.020140 2566 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:51:29.020399 kubelet[2566]: I0905 23:51:29.020375 2566 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:51:29.020967 kubelet[2566]: E0905 23:51:29.020684 2566 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-2b989ca6ad\" not found" Sep 5 23:51:29.022203 kubelet[2566]: I0905 23:51:29.021041 2566 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 23:51:29.025347 kubelet[2566]: I0905 23:51:29.021253 2566 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:51:29.026022 kubelet[2566]: I0905 23:51:29.022049 2566 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:51:29.038134 kubelet[2566]: I0905 23:51:29.037144 2566 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:51:29.038134 kubelet[2566]: I0905 23:51:29.037242 2566 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:51:29.042971 kubelet[2566]: I0905 23:51:29.042902 2566 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:51:29.045656 kubelet[2566]: I0905 23:51:29.045624 2566 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 23:51:29.045939 kubelet[2566]: I0905 23:51:29.045910 2566 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 23:51:29.046015 kubelet[2566]: I0905 23:51:29.046004 2566 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
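
The deprecation warnings above all point to the same remediation: --container-runtime-endpoint and --volume-plugin-dir are meant to move into the file passed via --config (while --pod-infra-container-image goes away entirely once the image garbage collector reads the sandbox image from CRI). A minimal sketch of that config file, generated with the upstream k8s.io/kubelet/config/v1beta1 types and mirroring what the NodeConfig dump above reports (systemd cgroup driver, the listed hard-eviction thresholds); the runtime socket path is an assumption for illustration:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        kubeletconfig "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := kubeletconfig.KubeletConfiguration{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "kubelet.config.k8s.io/v1beta1",
                Kind:       "KubeletConfiguration",
            },
            // Matches "CgroupDriver":"systemd" in the NodeConfig dump above.
            CgroupDriver: "systemd",
            // Replaces the deprecated --container-runtime-endpoint flag
            // (socket path assumed, not taken from this log).
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
            // Replaces the deprecated --volume-plugin-dir flag; path taken
            // from the FlexVolume probe messages later in this log.
            VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
            // The HardEvictionThresholds reported in the NodeConfig dump.
            EvictionHard: map[string]string{
                "memory.available":   "100Mi",
                "nodefs.available":   "10%",
                "nodefs.inodesFree":  "5%",
                "imagefs.available":  "15%",
                "imagefs.inodesFree": "5%",
            },
        }
        out, err := yaml.Marshal(&cfg)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
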
Sep 5 23:51:29.046921 kubelet[2566]: I0905 23:51:29.046066 2566 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 23:51:29.046921 kubelet[2566]: E0905 23:51:29.046124 2566 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:51:29.046921 kubelet[2566]: I0905 23:51:29.046716 2566 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:51:29.101547 kubelet[2566]: I0905 23:51:29.101501 2566 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 23:51:29.101547 kubelet[2566]: I0905 23:51:29.101519 2566 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 23:51:29.101894 kubelet[2566]: I0905 23:51:29.101666 2566 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:29.101894 kubelet[2566]: I0905 23:51:29.101882 2566 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 23:51:29.102001 kubelet[2566]: I0905 23:51:29.101893 2566 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 23:51:29.102001 kubelet[2566]: I0905 23:51:29.101910 2566 policy_none.go:49] "None policy: Start" Sep 5 23:51:29.102001 kubelet[2566]: I0905 23:51:29.101919 2566 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 23:51:29.102001 kubelet[2566]: I0905 23:51:29.101927 2566 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:51:29.102171 kubelet[2566]: I0905 23:51:29.102016 2566 state_mem.go:75] "Updated machine memory state" Sep 5 23:51:29.107399 kubelet[2566]: I0905 23:51:29.107360 2566 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:51:29.107591 kubelet[2566]: I0905 23:51:29.107570 2566 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:51:29.107591 kubelet[2566]: I0905 23:51:29.107591 2566 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:51:29.109825 kubelet[2566]: I0905 23:51:29.109671 2566 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:51:29.114473 kubelet[2566]: E0905 23:51:29.112000 2566 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 23:51:29.146967 kubelet[2566]: I0905 23:51:29.146880 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.148316 kubelet[2566]: I0905 23:51:29.147809 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.148529 kubelet[2566]: I0905 23:51:29.148472 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.211759 kubelet[2566]: I0905 23:51:29.211634 2566 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.224785 kubelet[2566]: I0905 23:51:29.224328 2566 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.224785 kubelet[2566]: I0905 23:51:29.224421 2566 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.230717 kubelet[2566]: I0905 23:51:29.230587 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/698eae738baa53f36aa4895190899c6b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" (UID: \"698eae738baa53f36aa4895190899c6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.230717 kubelet[2566]: I0905 23:51:29.230644 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/698eae738baa53f36aa4895190899c6b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" (UID: \"698eae738baa53f36aa4895190899c6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.230717 kubelet[2566]: I0905 23:51:29.230669 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e0a300e3fe01a3c5a9cbcf5c8f61581-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-2b989ca6ad\" (UID: \"4e0a300e3fe01a3c5a9cbcf5c8f61581\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.230717 kubelet[2566]: I0905 23:51:29.230688 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ceb9ea542bfb62c399dd48be82646d46-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-2b989ca6ad\" (UID: \"ceb9ea542bfb62c399dd48be82646d46\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.231104 kubelet[2566]: I0905 23:51:29.230947 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/698eae738baa53f36aa4895190899c6b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" (UID: \"698eae738baa53f36aa4895190899c6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.231104 kubelet[2566]: I0905 23:51:29.230987 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/698eae738baa53f36aa4895190899c6b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" (UID: \"698eae738baa53f36aa4895190899c6b\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.231104 kubelet[2566]: I0905 23:51:29.231024 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/698eae738baa53f36aa4895190899c6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-2b989ca6ad\" (UID: \"698eae738baa53f36aa4895190899c6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.231104 kubelet[2566]: I0905 23:51:29.231042 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ceb9ea542bfb62c399dd48be82646d46-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-2b989ca6ad\" (UID: \"ceb9ea542bfb62c399dd48be82646d46\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.231104 kubelet[2566]: I0905 23:51:29.231064 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ceb9ea542bfb62c399dd48be82646d46-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-2b989ca6ad\" (UID: \"ceb9ea542bfb62c399dd48be82646d46\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:29.998493 kubelet[2566]: I0905 23:51:29.998444 2566 apiserver.go:52] "Watching apiserver" Sep 5 23:51:30.026164 kubelet[2566]: I0905 23:51:30.026116 2566 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:51:30.079156 kubelet[2566]: I0905 23:51:30.079028 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:30.088830 kubelet[2566]: E0905 23:51:30.088769 2566 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-2b989ca6ad\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-2b989ca6ad" Sep 5 23:51:30.103686 kubelet[2566]: I0905 23:51:30.103459 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-2b989ca6ad" podStartSLOduration=1.103442856 podStartE2EDuration="1.103442856s" podCreationTimestamp="2025-09-05 23:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:30.103281406 +0000 UTC m=+1.196532800" watchObservedRunningTime="2025-09-05 23:51:30.103442856 +0000 UTC m=+1.196694250" Sep 5 23:51:30.103686 kubelet[2566]: I0905 23:51:30.103579 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-2b989ca6ad" podStartSLOduration=1.103573992 podStartE2EDuration="1.103573992s" podCreationTimestamp="2025-09-05 23:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:30.089765602 +0000 UTC m=+1.183017036" watchObservedRunningTime="2025-09-05 23:51:30.103573992 +0000 UTC m=+1.196825386" Sep 5 23:51:30.135085 kubelet[2566]: I0905 23:51:30.135001 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-2b989ca6ad" podStartSLOduration=1.134981665 podStartE2EDuration="1.134981665s" podCreationTimestamp="2025-09-05 23:51:29 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:30.1221197 +0000 UTC m=+1.215371094" watchObservedRunningTime="2025-09-05 23:51:30.134981665 +0000 UTC m=+1.228233059" Sep 5 23:51:32.888938 kubelet[2566]: I0905 23:51:32.888310 2566 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:51:32.889452 containerd[1476]: time="2025-09-05T23:51:32.888787016Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 23:51:32.889841 kubelet[2566]: I0905 23:51:32.889053 2566 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:51:32.956233 kubelet[2566]: W0905 23:51:32.955929 2566 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-5-n-2b989ca6ad" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-2b989ca6ad' and this object Sep 5 23:51:32.956233 kubelet[2566]: I0905 23:51:32.955926 2566 status_manager.go:890] "Failed to get status for pod" podUID="04399ac2-1fd2-482e-abca-8a408f24bcce" pod="kube-system/kube-proxy-fkxjh" err="pods \"kube-proxy-fkxjh\" is forbidden: User \"system:node:ci-4081-3-5-n-2b989ca6ad\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-2b989ca6ad' and this object" Sep 5 23:51:32.956233 kubelet[2566]: E0905 23:51:32.955974 2566 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-5-n-2b989ca6ad\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-2b989ca6ad' and this object" logger="UnhandledError" Sep 5 23:51:32.956233 kubelet[2566]: W0905 23:51:32.956028 2566 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-5-n-2b989ca6ad" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-2b989ca6ad' and this object Sep 5 23:51:32.956233 kubelet[2566]: E0905 23:51:32.956050 2566 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081-3-5-n-2b989ca6ad\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-2b989ca6ad' and this object" logger="UnhandledError" Sep 5 23:51:32.961596 systemd[1]: Created slice kubepods-besteffort-pod04399ac2_1fd2_482e_abca_8a408f24bcce.slice - libcontainer container kubepods-besteffort-pod04399ac2_1fd2_482e_abca_8a408f24bcce.slice. 
Sep 5 23:51:33.057633 kubelet[2566]: I0905 23:51:33.057458 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/04399ac2-1fd2-482e-abca-8a408f24bcce-kube-proxy\") pod \"kube-proxy-fkxjh\" (UID: \"04399ac2-1fd2-482e-abca-8a408f24bcce\") " pod="kube-system/kube-proxy-fkxjh" Sep 5 23:51:33.057633 kubelet[2566]: I0905 23:51:33.057525 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04399ac2-1fd2-482e-abca-8a408f24bcce-lib-modules\") pod \"kube-proxy-fkxjh\" (UID: \"04399ac2-1fd2-482e-abca-8a408f24bcce\") " pod="kube-system/kube-proxy-fkxjh" Sep 5 23:51:33.057633 kubelet[2566]: I0905 23:51:33.057588 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04399ac2-1fd2-482e-abca-8a408f24bcce-xtables-lock\") pod \"kube-proxy-fkxjh\" (UID: \"04399ac2-1fd2-482e-abca-8a408f24bcce\") " pod="kube-system/kube-proxy-fkxjh" Sep 5 23:51:33.057633 kubelet[2566]: I0905 23:51:33.057629 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c852\" (UniqueName: \"kubernetes.io/projected/04399ac2-1fd2-482e-abca-8a408f24bcce-kube-api-access-4c852\") pod \"kube-proxy-fkxjh\" (UID: \"04399ac2-1fd2-482e-abca-8a408f24bcce\") " pod="kube-system/kube-proxy-fkxjh" Sep 5 23:51:34.102228 systemd[1]: Created slice kubepods-besteffort-pod0b92b414_aaac_4143_b468_07451eb03ab9.slice - libcontainer container kubepods-besteffort-pod0b92b414_aaac_4143_b468_07451eb03ab9.slice. Sep 5 23:51:34.164357 kubelet[2566]: I0905 23:51:34.164297 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mftkn\" (UniqueName: \"kubernetes.io/projected/0b92b414-aaac-4143-b468-07451eb03ab9-kube-api-access-mftkn\") pod \"tigera-operator-755d956888-dlwj7\" (UID: \"0b92b414-aaac-4143-b468-07451eb03ab9\") " pod="tigera-operator/tigera-operator-755d956888-dlwj7" Sep 5 23:51:34.165018 kubelet[2566]: I0905 23:51:34.164477 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0b92b414-aaac-4143-b468-07451eb03ab9-var-lib-calico\") pod \"tigera-operator-755d956888-dlwj7\" (UID: \"0b92b414-aaac-4143-b468-07451eb03ab9\") " pod="tigera-operator/tigera-operator-755d956888-dlwj7" Sep 5 23:51:34.171048 kubelet[2566]: E0905 23:51:34.170405 2566 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 5 23:51:34.171048 kubelet[2566]: E0905 23:51:34.170455 2566 projected.go:194] Error preparing data for projected volume kube-api-access-4c852 for pod kube-system/kube-proxy-fkxjh: failed to sync configmap cache: timed out waiting for the condition Sep 5 23:51:34.171048 kubelet[2566]: E0905 23:51:34.170625 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/04399ac2-1fd2-482e-abca-8a408f24bcce-kube-api-access-4c852 podName:04399ac2-1fd2-482e-abca-8a408f24bcce nodeName:}" failed. No retries permitted until 2025-09-05 23:51:34.670582479 +0000 UTC m=+5.763833913 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4c852" (UniqueName: "kubernetes.io/projected/04399ac2-1fd2-482e-abca-8a408f24bcce-kube-api-access-4c852") pod "kube-proxy-fkxjh" (UID: "04399ac2-1fd2-482e-abca-8a408f24bcce") : failed to sync configmap cache: timed out waiting for the condition Sep 5 23:51:34.407490 containerd[1476]: time="2025-09-05T23:51:34.406826151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-dlwj7,Uid:0b92b414-aaac-4143-b468-07451eb03ab9,Namespace:tigera-operator,Attempt:0,}" Sep 5 23:51:34.432704 containerd[1476]: time="2025-09-05T23:51:34.432316446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:34.432704 containerd[1476]: time="2025-09-05T23:51:34.432400633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:34.432704 containerd[1476]: time="2025-09-05T23:51:34.432418151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:34.432704 containerd[1476]: time="2025-09-05T23:51:34.432522176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:34.460767 systemd[1]: Started cri-containerd-e38103ec5e5b27802e649de375ca937fe34bb085cf9d9209b3d2941c60584294.scope - libcontainer container e38103ec5e5b27802e649de375ca937fe34bb085cf9d9209b3d2941c60584294. Sep 5 23:51:34.490838 containerd[1476]: time="2025-09-05T23:51:34.490787398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-dlwj7,Uid:0b92b414-aaac-4143-b468-07451eb03ab9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e38103ec5e5b27802e649de375ca937fe34bb085cf9d9209b3d2941c60584294\"" Sep 5 23:51:34.494261 containerd[1476]: time="2025-09-05T23:51:34.493595394Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 23:51:35.073579 containerd[1476]: time="2025-09-05T23:51:35.073407825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fkxjh,Uid:04399ac2-1fd2-482e-abca-8a408f24bcce,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:35.102574 containerd[1476]: time="2025-09-05T23:51:35.102290612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:35.102574 containerd[1476]: time="2025-09-05T23:51:35.102346644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:35.102574 containerd[1476]: time="2025-09-05T23:51:35.102357563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:35.102574 containerd[1476]: time="2025-09-05T23:51:35.102444471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:35.120773 systemd[1]: Started cri-containerd-80ae4d310ffc5b090e9a1d8e1dadf110e2276809a990eca01694e3262b1edb9c.scope - libcontainer container 80ae4d310ffc5b090e9a1d8e1dadf110e2276809a990eca01694e3262b1edb9c. 
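
The MountVolume.SetUp failure above is retried on the kubelet's per-operation exponential backoff: "durationBeforeRetry 500ms" is the first step, and each further failure doubles the delay up to a cap (2m2s in the upstream goroutinemap package; treat the exact constants as assumptions). A toy rendering of the cadence:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        initial := 500 * time.Millisecond       // first durationBeforeRetry, as logged
        maxDelay := 2*time.Minute + 2*time.Second // assumed upstream cap
        d := initial
        for i := 1; i <= 6; i++ {
            fmt.Printf("failure %d: next retry in %v\n", i, d)
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
    }

Here the first retry (at 23:51:34.67 + 500ms) succeeded once the configmap cache synced, so the backoff never escalated.
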
Sep 5 23:51:35.153847 containerd[1476]: time="2025-09-05T23:51:35.152973740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fkxjh,Uid:04399ac2-1fd2-482e-abca-8a408f24bcce,Namespace:kube-system,Attempt:0,} returns sandbox id \"80ae4d310ffc5b090e9a1d8e1dadf110e2276809a990eca01694e3262b1edb9c\"" Sep 5 23:51:35.159424 containerd[1476]: time="2025-09-05T23:51:35.159376597Z" level=info msg="CreateContainer within sandbox \"80ae4d310ffc5b090e9a1d8e1dadf110e2276809a990eca01694e3262b1edb9c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:51:35.176716 containerd[1476]: time="2025-09-05T23:51:35.176640549Z" level=info msg="CreateContainer within sandbox \"80ae4d310ffc5b090e9a1d8e1dadf110e2276809a990eca01694e3262b1edb9c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d230c46cd6adeceb2ffa4b0553303167458df1bd184d550e8b21666ef02bf718\"" Sep 5 23:51:35.177672 containerd[1476]: time="2025-09-05T23:51:35.177273504Z" level=info msg="StartContainer for \"d230c46cd6adeceb2ffa4b0553303167458df1bd184d550e8b21666ef02bf718\"" Sep 5 23:51:35.207744 systemd[1]: Started cri-containerd-d230c46cd6adeceb2ffa4b0553303167458df1bd184d550e8b21666ef02bf718.scope - libcontainer container d230c46cd6adeceb2ffa4b0553303167458df1bd184d550e8b21666ef02bf718. Sep 5 23:51:35.246452 containerd[1476]: time="2025-09-05T23:51:35.246331595Z" level=info msg="StartContainer for \"d230c46cd6adeceb2ffa4b0553303167458df1bd184d550e8b21666ef02bf718\" returns successfully" Sep 5 23:51:36.112608 kubelet[2566]: I0905 23:51:36.111897 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fkxjh" podStartSLOduration=4.111871445 podStartE2EDuration="4.111871445s" podCreationTimestamp="2025-09-05 23:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:36.111622037 +0000 UTC m=+7.204873431" watchObservedRunningTime="2025-09-05 23:51:36.111871445 +0000 UTC m=+7.205122879" Sep 5 23:51:37.515555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4251687505.mount: Deactivated successfully. 
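
The CreateContainer/StartContainer pair above maps onto two more RuntimeService RPCs, issued against the sandbox that RunPodSandbox returned. A minimal sketch using the IDs from this log; the kube-proxy image reference is an assumption, since the log never names it:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Sandbox id and pod metadata as reported by RunPodSandbox above.
        sandboxID := "80ae4d310ffc5b090e9a1d8e1dadf110e2276809a990eca01694e3262b1edb9c"
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-fkxjh",
                Uid:       "04399ac2-1fd2-482e-abca-8a408f24bcce",
                Namespace: "kube-system",
            },
        }
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                // Image tag assumed from the kubelet version, not from the log.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.4"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
    }
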
Sep 5 23:51:45.355395 containerd[1476]: time="2025-09-05T23:51:45.355312825Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:45.357316 containerd[1476]: time="2025-09-05T23:51:45.357261047Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 5 23:51:45.358400 containerd[1476]: time="2025-09-05T23:51:45.358339611Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:45.361812 containerd[1476]: time="2025-09-05T23:51:45.361312401Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:45.362357 containerd[1476]: time="2025-09-05T23:51:45.362317649Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 10.868668662s" Sep 5 23:51:45.362357 containerd[1476]: time="2025-09-05T23:51:45.362354847Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 5 23:51:45.366220 containerd[1476]: time="2025-09-05T23:51:45.366154458Z" level=info msg="CreateContainer within sandbox \"e38103ec5e5b27802e649de375ca937fe34bb085cf9d9209b3d2941c60584294\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 23:51:45.386184 containerd[1476]: time="2025-09-05T23:51:45.386122527Z" level=info msg="CreateContainer within sandbox \"e38103ec5e5b27802e649de375ca937fe34bb085cf9d9209b3d2941c60584294\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3\"" Sep 5 23:51:45.387095 containerd[1476]: time="2025-09-05T23:51:45.387018103Z" level=info msg="StartContainer for \"f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3\"" Sep 5 23:51:45.421194 systemd[1]: run-containerd-runc-k8s.io-f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3-runc.d13E8g.mount: Deactivated successfully. Sep 5 23:51:45.428761 systemd[1]: Started cri-containerd-f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3.scope - libcontainer container f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3. Sep 5 23:51:45.461961 containerd[1476]: time="2025-09-05T23:51:45.461876331Z" level=info msg="StartContainer for \"f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3\" returns successfully" Sep 5 23:51:51.711202 sudo[1715]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:51.882109 sshd[1712]: pam_unix(sshd:session): session closed for user core Sep 5 23:51:51.889048 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Sep 5 23:51:51.889687 systemd[1]: sshd@6-91.98.45.119:22-139.178.68.195:49772.service: Deactivated successfully. Sep 5 23:51:51.893472 systemd[1]: session-7.scope: Deactivated successfully. 
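
The operator image pull recorded above took 10.87s and resolved the v1.38.6 tag to a repo digest; the digest is the stable handle for the image. A sketch of the same pull through the containerd Go client (the "k8s.io" namespace is the one CRI populates; the socket path is assumed):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        // Pin by the repo digest reported in the pull log rather than the tag.
        ref := "quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(img.Name(), img.Target().Digest)
    }
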
Sep 5 23:51:51.894153 systemd[1]: session-7.scope: Consumed 6.934s CPU time, 154.1M memory peak, 0B memory swap peak. Sep 5 23:51:51.896094 systemd-logind[1458]: Removed session 7. Sep 5 23:51:58.791943 kubelet[2566]: I0905 23:51:58.791795 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-dlwj7" podStartSLOduration=13.921213697 podStartE2EDuration="24.791770202s" podCreationTimestamp="2025-09-05 23:51:34 +0000 UTC" firstStartedPulling="2025-09-05 23:51:34.492767753 +0000 UTC m=+5.586019147" lastFinishedPulling="2025-09-05 23:51:45.363324258 +0000 UTC m=+16.456575652" observedRunningTime="2025-09-05 23:51:46.137935179 +0000 UTC m=+17.231186653" watchObservedRunningTime="2025-09-05 23:51:58.791770202 +0000 UTC m=+29.885021596" Sep 5 23:51:58.802788 systemd[1]: Created slice kubepods-besteffort-pod2b172cd3_8c88_4449_9b63_26ff059e9602.slice - libcontainer container kubepods-besteffort-pod2b172cd3_8c88_4449_9b63_26ff059e9602.slice. Sep 5 23:51:58.826059 kubelet[2566]: I0905 23:51:58.825768 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2b172cd3-8c88-4449-9b63-26ff059e9602-typha-certs\") pod \"calico-typha-689bc7bc6f-tsk9z\" (UID: \"2b172cd3-8c88-4449-9b63-26ff059e9602\") " pod="calico-system/calico-typha-689bc7bc6f-tsk9z" Sep 5 23:51:58.826059 kubelet[2566]: I0905 23:51:58.825813 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b172cd3-8c88-4449-9b63-26ff059e9602-tigera-ca-bundle\") pod \"calico-typha-689bc7bc6f-tsk9z\" (UID: \"2b172cd3-8c88-4449-9b63-26ff059e9602\") " pod="calico-system/calico-typha-689bc7bc6f-tsk9z" Sep 5 23:51:58.826059 kubelet[2566]: I0905 23:51:58.825834 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqnn\" (UniqueName: \"kubernetes.io/projected/2b172cd3-8c88-4449-9b63-26ff059e9602-kube-api-access-xfqnn\") pod \"calico-typha-689bc7bc6f-tsk9z\" (UID: \"2b172cd3-8c88-4449-9b63-26ff059e9602\") " pod="calico-system/calico-typha-689bc7bc6f-tsk9z" Sep 5 23:51:58.973753 systemd[1]: Created slice kubepods-besteffort-pod0fb3d61f_a996_46d8_894c_129e1977ca94.slice - libcontainer container kubepods-besteffort-pod0fb3d61f_a996_46d8_894c_129e1977ca94.slice. 
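
The two durations in the tigera-operator startup record above are consistent with each other: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (23:51:58.791770202 − 23:51:34 = 24.791770202s), and podStartSLOduration additionally subtracts the image-pull window (24.791770202s − 10.870556505s = 13.921213697s). A check with the exact timestamps from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-05 23:51:34 +0000 UTC")
        pullStart := mustParse("2025-09-05 23:51:34.492767753 +0000 UTC")
        pullEnd := mustParse("2025-09-05 23:51:45.363324258 +0000 UTC")
        running := mustParse("2025-09-05 23:51:58.791770202 +0000 UTC")

        e2e := running.Sub(created)         // 24.791770202s, as logged
        slo := e2e - pullEnd.Sub(pullStart) // 13.921213697s, as logged
        fmt.Println(e2e, slo)
    }
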
Sep 5 23:51:59.027636 kubelet[2566]: I0905 23:51:59.027546 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0fb3d61f-a996-46d8-894c-129e1977ca94-var-run-calico\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.028196 kubelet[2566]: I0905 23:51:59.028091 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fb3d61f-a996-46d8-894c-129e1977ca94-tigera-ca-bundle\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.028196 kubelet[2566]: I0905 23:51:59.028139 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fb3d61f-a996-46d8-894c-129e1977ca94-xtables-lock\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.028196 kubelet[2566]: I0905 23:51:59.028158 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0fb3d61f-a996-46d8-894c-129e1977ca94-cni-log-dir\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.029500 kubelet[2566]: I0905 23:51:59.028180 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0fb3d61f-a996-46d8-894c-129e1977ca94-cni-net-dir\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.029500 kubelet[2566]: I0905 23:51:59.029448 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0fb3d61f-a996-46d8-894c-129e1977ca94-policysync\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.029731 kubelet[2566]: I0905 23:51:59.029466 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0fb3d61f-a996-46d8-894c-129e1977ca94-var-lib-calico\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.029731 kubelet[2566]: I0905 23:51:59.029583 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0fb3d61f-a996-46d8-894c-129e1977ca94-cni-bin-dir\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.029731 kubelet[2566]: I0905 23:51:59.029607 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0fb3d61f-a996-46d8-894c-129e1977ca94-flexvol-driver-host\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.029990 kubelet[2566]: I0905 23:51:59.029848 2566 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fb3d61f-a996-46d8-894c-129e1977ca94-lib-modules\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.029990 kubelet[2566]: I0905 23:51:59.029912 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0fb3d61f-a996-46d8-894c-129e1977ca94-node-certs\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.029990 kubelet[2566]: I0905 23:51:59.029933 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2x8k\" (UniqueName: \"kubernetes.io/projected/0fb3d61f-a996-46d8-894c-129e1977ca94-kube-api-access-n2x8k\") pod \"calico-node-l92rg\" (UID: \"0fb3d61f-a996-46d8-894c-129e1977ca94\") " pod="calico-system/calico-node-l92rg" Sep 5 23:51:59.109138 containerd[1476]: time="2025-09-05T23:51:59.109081782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-689bc7bc6f-tsk9z,Uid:2b172cd3-8c88-4449-9b63-26ff059e9602,Namespace:calico-system,Attempt:0,}" Sep 5 23:51:59.144083 kubelet[2566]: E0905 23:51:59.143204 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.144083 kubelet[2566]: W0905 23:51:59.143230 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.144083 kubelet[2566]: E0905 23:51:59.143255 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.149141 kubelet[2566]: E0905 23:51:59.145693 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.149141 kubelet[2566]: W0905 23:51:59.145721 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.149141 kubelet[2566]: E0905 23:51:59.145758 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.149141 kubelet[2566]: E0905 23:51:59.146069 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.149141 kubelet[2566]: W0905 23:51:59.146079 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.149141 kubelet[2566]: E0905 23:51:59.148584 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.149141 kubelet[2566]: E0905 23:51:59.148841 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.149141 kubelet[2566]: W0905 23:51:59.148851 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.149141 kubelet[2566]: E0905 23:51:59.148885 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.149141 kubelet[2566]: E0905 23:51:59.149073 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.149480 kubelet[2566]: W0905 23:51:59.149081 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.149480 kubelet[2566]: E0905 23:51:59.149113 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.149842 kubelet[2566]: E0905 23:51:59.149800 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.149842 kubelet[2566]: W0905 23:51:59.149814 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.149974 kubelet[2566]: E0905 23:51:59.149949 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.150221 kubelet[2566]: E0905 23:51:59.150208 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.150369 kubelet[2566]: W0905 23:51:59.150297 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.150433 kubelet[2566]: E0905 23:51:59.150420 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.151142 kubelet[2566]: E0905 23:51:59.151127 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.151419 kubelet[2566]: W0905 23:51:59.151267 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.151868 kubelet[2566]: E0905 23:51:59.151775 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.151868 kubelet[2566]: W0905 23:51:59.151798 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.152897 kubelet[2566]: E0905 23:51:59.152345 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.152897 kubelet[2566]: W0905 23:51:59.152359 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.152897 kubelet[2566]: E0905 23:51:59.152836 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.153192 kubelet[2566]: E0905 23:51:59.153072 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.153192 kubelet[2566]: W0905 23:51:59.153088 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.153192 kubelet[2566]: E0905 23:51:59.153121 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.153192 kubelet[2566]: E0905 23:51:59.153138 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.153192 kubelet[2566]: E0905 23:51:59.153160 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.156872 kubelet[2566]: E0905 23:51:59.156163 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.156872 kubelet[2566]: W0905 23:51:59.156336 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.156872 kubelet[2566]: E0905 23:51:59.156379 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.158660 kubelet[2566]: E0905 23:51:59.157278 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.158660 kubelet[2566]: W0905 23:51:59.158187 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.158660 kubelet[2566]: E0905 23:51:59.158242 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.158660 kubelet[2566]: E0905 23:51:59.158491 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.158660 kubelet[2566]: W0905 23:51:59.158504 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.158660 kubelet[2566]: E0905 23:51:59.158552 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.159127 kubelet[2566]: E0905 23:51:59.159099 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.159127 kubelet[2566]: W0905 23:51:59.159113 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.159777 kubelet[2566]: E0905 23:51:59.159293 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.161474 kubelet[2566]: E0905 23:51:59.161453 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.161805 kubelet[2566]: W0905 23:51:59.161524 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.161805 kubelet[2566]: E0905 23:51:59.161721 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.162425 kubelet[2566]: E0905 23:51:59.162338 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.162425 kubelet[2566]: W0905 23:51:59.162354 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.162425 kubelet[2566]: E0905 23:51:59.162368 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.170257 kubelet[2566]: E0905 23:51:59.170159 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.170257 kubelet[2566]: W0905 23:51:59.170180 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.170257 kubelet[2566]: E0905 23:51:59.170198 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.200878 containerd[1476]: time="2025-09-05T23:51:59.199766508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:59.200878 containerd[1476]: time="2025-09-05T23:51:59.199845388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:59.200878 containerd[1476]: time="2025-09-05T23:51:59.199871668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:59.200878 containerd[1476]: time="2025-09-05T23:51:59.200010989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:59.228404 kubelet[2566]: E0905 23:51:59.228090 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tlkz" podUID="7f45c648-2314-4d67-9e44-04067f334c76" Sep 5 23:51:59.235791 systemd[1]: Started cri-containerd-aaacdb5ed2de52617034ddd4feb486f6f08dffdc9c34c0e6f988a425f1b9d2bf.scope - libcontainer container aaacdb5ed2de52617034ddd4feb486f6f08dffdc9c34c0e6f988a425f1b9d2bf. Sep 5 23:51:59.281025 containerd[1476]: time="2025-09-05T23:51:59.280564047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l92rg,Uid:0fb3d61f-a996-46d8-894c-129e1977ca94,Namespace:calico-system,Attempt:0,}" Sep 5 23:51:59.317518 containerd[1476]: time="2025-09-05T23:51:59.317063705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:59.317518 containerd[1476]: time="2025-09-05T23:51:59.317127946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:59.317518 containerd[1476]: time="2025-09-05T23:51:59.317139906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:59.317518 containerd[1476]: time="2025-09-05T23:51:59.317254946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:59.320768 kubelet[2566]: E0905 23:51:59.320615 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.320768 kubelet[2566]: W0905 23:51:59.320651 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.320768 kubelet[2566]: E0905 23:51:59.320702 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.321320 kubelet[2566]: E0905 23:51:59.321144 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.321320 kubelet[2566]: W0905 23:51:59.321157 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.321320 kubelet[2566]: E0905 23:51:59.321204 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.322780 kubelet[2566]: E0905 23:51:59.321918 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.322987 kubelet[2566]: W0905 23:51:59.322851 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.322987 kubelet[2566]: E0905 23:51:59.322875 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.323672 kubelet[2566]: E0905 23:51:59.323501 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.323672 kubelet[2566]: W0905 23:51:59.323516 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.323672 kubelet[2566]: E0905 23:51:59.323529 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.326590 kubelet[2566]: E0905 23:51:59.325450 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.326590 kubelet[2566]: W0905 23:51:59.325492 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.326590 kubelet[2566]: E0905 23:51:59.325843 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.327080 kubelet[2566]: E0905 23:51:59.327065 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.327322 kubelet[2566]: W0905 23:51:59.327169 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.327322 kubelet[2566]: E0905 23:51:59.327188 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.328502 kubelet[2566]: E0905 23:51:59.328483 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.328761 kubelet[2566]: W0905 23:51:59.328594 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.328761 kubelet[2566]: E0905 23:51:59.328631 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.329834 kubelet[2566]: E0905 23:51:59.329595 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.329834 kubelet[2566]: W0905 23:51:59.329611 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.329834 kubelet[2566]: E0905 23:51:59.329626 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.331362 kubelet[2566]: E0905 23:51:59.331127 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.331362 kubelet[2566]: W0905 23:51:59.331149 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.331362 kubelet[2566]: E0905 23:51:59.331165 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.333029 kubelet[2566]: E0905 23:51:59.332863 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.333029 kubelet[2566]: W0905 23:51:59.332900 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.333029 kubelet[2566]: E0905 23:51:59.332915 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.335870 kubelet[2566]: E0905 23:51:59.335695 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.335870 kubelet[2566]: W0905 23:51:59.335721 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.335870 kubelet[2566]: E0905 23:51:59.335741 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.336269 kubelet[2566]: E0905 23:51:59.336039 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.336269 kubelet[2566]: W0905 23:51:59.336050 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.336269 kubelet[2566]: E0905 23:51:59.336061 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.336502 kubelet[2566]: E0905 23:51:59.336398 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.336502 kubelet[2566]: W0905 23:51:59.336409 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.336502 kubelet[2566]: E0905 23:51:59.336420 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.336806 kubelet[2566]: E0905 23:51:59.336616 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.336806 kubelet[2566]: W0905 23:51:59.336626 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.336806 kubelet[2566]: E0905 23:51:59.336685 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.337744 kubelet[2566]: E0905 23:51:59.337070 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.337744 kubelet[2566]: W0905 23:51:59.337083 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.337744 kubelet[2566]: E0905 23:51:59.337094 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.338754 kubelet[2566]: E0905 23:51:59.338676 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.338754 kubelet[2566]: W0905 23:51:59.338698 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.338754 kubelet[2566]: E0905 23:51:59.338715 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.339364 kubelet[2566]: E0905 23:51:59.338956 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.339364 kubelet[2566]: W0905 23:51:59.338965 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.339364 kubelet[2566]: E0905 23:51:59.338975 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.340345 kubelet[2566]: E0905 23:51:59.339599 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.340345 kubelet[2566]: W0905 23:51:59.339617 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.340345 kubelet[2566]: E0905 23:51:59.339650 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.340345 kubelet[2566]: E0905 23:51:59.340004 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.340345 kubelet[2566]: W0905 23:51:59.340016 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.340345 kubelet[2566]: E0905 23:51:59.340030 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.340345 kubelet[2566]: E0905 23:51:59.340274 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.340345 kubelet[2566]: W0905 23:51:59.340284 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.340345 kubelet[2566]: E0905 23:51:59.340292 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.341238 kubelet[2566]: E0905 23:51:59.340891 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.341238 kubelet[2566]: W0905 23:51:59.340911 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.341238 kubelet[2566]: E0905 23:51:59.340927 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.341238 kubelet[2566]: I0905 23:51:59.340954 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f45c648-2314-4d67-9e44-04067f334c76-kubelet-dir\") pod \"csi-node-driver-6tlkz\" (UID: \"7f45c648-2314-4d67-9e44-04067f334c76\") " pod="calico-system/csi-node-driver-6tlkz" Sep 5 23:51:59.341623 kubelet[2566]: E0905 23:51:59.341409 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.341623 kubelet[2566]: W0905 23:51:59.341427 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.341623 kubelet[2566]: E0905 23:51:59.341445 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.341623 kubelet[2566]: I0905 23:51:59.341464 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7f45c648-2314-4d67-9e44-04067f334c76-registration-dir\") pod \"csi-node-driver-6tlkz\" (UID: \"7f45c648-2314-4d67-9e44-04067f334c76\") " pod="calico-system/csi-node-driver-6tlkz" Sep 5 23:51:59.342577 kubelet[2566]: E0905 23:51:59.341857 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.342577 kubelet[2566]: W0905 23:51:59.341874 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.342577 kubelet[2566]: E0905 23:51:59.341892 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.342577 kubelet[2566]: I0905 23:51:59.341909 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f45c648-2314-4d67-9e44-04067f334c76-socket-dir\") pod \"csi-node-driver-6tlkz\" (UID: \"7f45c648-2314-4d67-9e44-04067f334c76\") " pod="calico-system/csi-node-driver-6tlkz" Sep 5 23:51:59.343165 kubelet[2566]: E0905 23:51:59.342958 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.343165 kubelet[2566]: W0905 23:51:59.342978 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.343165 kubelet[2566]: E0905 23:51:59.343000 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.343165 kubelet[2566]: I0905 23:51:59.343018 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7f45c648-2314-4d67-9e44-04067f334c76-varrun\") pod \"csi-node-driver-6tlkz\" (UID: \"7f45c648-2314-4d67-9e44-04067f334c76\") " pod="calico-system/csi-node-driver-6tlkz" Sep 5 23:51:59.344787 kubelet[2566]: E0905 23:51:59.343473 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.344787 kubelet[2566]: W0905 23:51:59.343489 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.344787 kubelet[2566]: E0905 23:51:59.343523 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.344787 kubelet[2566]: I0905 23:51:59.344649 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2bf\" (UniqueName: \"kubernetes.io/projected/7f45c648-2314-4d67-9e44-04067f334c76-kube-api-access-zz2bf\") pod \"csi-node-driver-6tlkz\" (UID: \"7f45c648-2314-4d67-9e44-04067f334c76\") " pod="calico-system/csi-node-driver-6tlkz" Sep 5 23:51:59.344787 kubelet[2566]: E0905 23:51:59.343881 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.344787 kubelet[2566]: W0905 23:51:59.344706 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.344787 kubelet[2566]: E0905 23:51:59.344745 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.345887 kubelet[2566]: E0905 23:51:59.345732 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.345887 kubelet[2566]: W0905 23:51:59.345752 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.345887 kubelet[2566]: E0905 23:51:59.345798 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.347313 kubelet[2566]: E0905 23:51:59.347071 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.347313 kubelet[2566]: W0905 23:51:59.347111 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.347313 kubelet[2566]: E0905 23:51:59.347153 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.349131 kubelet[2566]: E0905 23:51:59.348920 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.349131 kubelet[2566]: W0905 23:51:59.348967 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.349131 kubelet[2566]: E0905 23:51:59.349003 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.349906 kubelet[2566]: E0905 23:51:59.349624 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.349906 kubelet[2566]: W0905 23:51:59.349694 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.349906 kubelet[2566]: E0905 23:51:59.349731 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.351306 kubelet[2566]: E0905 23:51:59.350454 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.351306 kubelet[2566]: W0905 23:51:59.350473 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.351306 kubelet[2566]: E0905 23:51:59.350488 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.353137 kubelet[2566]: E0905 23:51:59.352997 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.353137 kubelet[2566]: W0905 23:51:59.353021 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.353137 kubelet[2566]: E0905 23:51:59.353043 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.354262 kubelet[2566]: E0905 23:51:59.353704 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.354262 kubelet[2566]: W0905 23:51:59.353725 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.354262 kubelet[2566]: E0905 23:51:59.353741 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.355770 kubelet[2566]: E0905 23:51:59.354833 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.355770 kubelet[2566]: W0905 23:51:59.354851 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.355770 kubelet[2566]: E0905 23:51:59.354867 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.357473 kubelet[2566]: E0905 23:51:59.357448 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.357789 kubelet[2566]: W0905 23:51:59.357717 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.358295 kubelet[2566]: E0905 23:51:59.357768 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.366355 systemd[1]: Started cri-containerd-89717b886e53f85f7084cfe4e0b9d3b1fd22d93dad91f328fbbd7d8a0a715aa6.scope - libcontainer container 89717b886e53f85f7084cfe4e0b9d3b1fd22d93dad91f328fbbd7d8a0a715aa6. 
Sep 5 23:51:59.444200 containerd[1476]: time="2025-09-05T23:51:59.444084929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l92rg,Uid:0fb3d61f-a996-46d8-894c-129e1977ca94,Namespace:calico-system,Attempt:0,} returns sandbox id \"89717b886e53f85f7084cfe4e0b9d3b1fd22d93dad91f328fbbd7d8a0a715aa6\"" Sep 5 23:51:59.445551 kubelet[2566]: E0905 23:51:59.445374 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.446606 kubelet[2566]: W0905 23:51:59.445924 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.446606 kubelet[2566]: E0905 23:51:59.445957 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.447563 kubelet[2566]: E0905 23:51:59.446823 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.447563 kubelet[2566]: W0905 23:51:59.446849 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.447563 kubelet[2566]: E0905 23:51:59.446867 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.448702 kubelet[2566]: E0905 23:51:59.447979 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.448702 kubelet[2566]: W0905 23:51:59.448010 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.448702 kubelet[2566]: E0905 23:51:59.448035 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.448838 containerd[1476]: time="2025-09-05T23:51:59.448143060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 23:51:59.449886 kubelet[2566]: E0905 23:51:59.449687 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.449886 kubelet[2566]: W0905 23:51:59.449713 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.450778 kubelet[2566]: E0905 23:51:59.450146 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.450778 kubelet[2566]: E0905 23:51:59.450748 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.451025 kubelet[2566]: W0905 23:51:59.450760 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.451025 kubelet[2566]: E0905 23:51:59.450976 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.451793 kubelet[2566]: E0905 23:51:59.451773 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.452339 kubelet[2566]: W0905 23:51:59.451841 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.452339 kubelet[2566]: E0905 23:51:59.451888 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.453794 kubelet[2566]: E0905 23:51:59.453629 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.453794 kubelet[2566]: W0905 23:51:59.453667 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.453794 kubelet[2566]: E0905 23:51:59.453742 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.454846 kubelet[2566]: E0905 23:51:59.454818 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.455519 kubelet[2566]: W0905 23:51:59.454904 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.455519 kubelet[2566]: E0905 23:51:59.455392 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.455989 kubelet[2566]: E0905 23:51:59.455964 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.456083 kubelet[2566]: W0905 23:51:59.456067 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.456265 kubelet[2566]: E0905 23:51:59.456196 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.456881 kubelet[2566]: E0905 23:51:59.456758 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.456881 kubelet[2566]: W0905 23:51:59.456774 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.456881 kubelet[2566]: E0905 23:51:59.456830 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.459089 kubelet[2566]: E0905 23:51:59.458920 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.459089 kubelet[2566]: W0905 23:51:59.458944 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.459089 kubelet[2566]: E0905 23:51:59.459006 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.459806 kubelet[2566]: E0905 23:51:59.459673 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.459806 kubelet[2566]: W0905 23:51:59.459696 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.459806 kubelet[2566]: E0905 23:51:59.459744 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.460174 kubelet[2566]: E0905 23:51:59.459979 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.460174 kubelet[2566]: W0905 23:51:59.459990 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.460174 kubelet[2566]: E0905 23:51:59.460036 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.461284 kubelet[2566]: E0905 23:51:59.461263 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.461456 kubelet[2566]: W0905 23:51:59.461360 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.461721 kubelet[2566]: E0905 23:51:59.461685 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.462697 kubelet[2566]: E0905 23:51:59.462521 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.462697 kubelet[2566]: W0905 23:51:59.462559 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.462829 kubelet[2566]: E0905 23:51:59.462799 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.463040 kubelet[2566]: E0905 23:51:59.463012 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.463040 kubelet[2566]: W0905 23:51:59.463025 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.463272 kubelet[2566]: E0905 23:51:59.463180 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.463446 kubelet[2566]: E0905 23:51:59.463434 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.463600 kubelet[2566]: W0905 23:51:59.463488 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.464780 kubelet[2566]: E0905 23:51:59.463532 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.465080 kubelet[2566]: E0905 23:51:59.464948 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.465080 kubelet[2566]: W0905 23:51:59.464963 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.465320 kubelet[2566]: E0905 23:51:59.465296 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.465805 kubelet[2566]: E0905 23:51:59.465511 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.465805 kubelet[2566]: W0905 23:51:59.465525 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.465805 kubelet[2566]: E0905 23:51:59.465600 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.466246 kubelet[2566]: E0905 23:51:59.466227 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.466492 kubelet[2566]: W0905 23:51:59.466366 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.466492 kubelet[2566]: E0905 23:51:59.466423 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.466891 kubelet[2566]: E0905 23:51:59.466777 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.466891 kubelet[2566]: W0905 23:51:59.466796 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.466891 kubelet[2566]: E0905 23:51:59.466840 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.467697 kubelet[2566]: E0905 23:51:59.467564 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.467697 kubelet[2566]: W0905 23:51:59.467595 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.467697 kubelet[2566]: E0905 23:51:59.467647 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.467894 kubelet[2566]: E0905 23:51:59.467864 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.467894 kubelet[2566]: W0905 23:51:59.467879 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.468608 kubelet[2566]: E0905 23:51:59.468040 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.469543 kubelet[2566]: E0905 23:51:59.469410 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.469543 kubelet[2566]: W0905 23:51:59.469430 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.469543 kubelet[2566]: E0905 23:51:59.469453 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:51:59.470623 kubelet[2566]: E0905 23:51:59.469798 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.470623 kubelet[2566]: W0905 23:51:59.469813 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.470623 kubelet[2566]: E0905 23:51:59.469829 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.506191 kubelet[2566]: E0905 23:51:59.506136 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:51:59.506191 kubelet[2566]: W0905 23:51:59.506167 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:51:59.506191 kubelet[2566]: E0905 23:51:59.506192 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:51:59.511289 containerd[1476]: time="2025-09-05T23:51:59.511245671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-689bc7bc6f-tsk9z,Uid:2b172cd3-8c88-4449-9b63-26ff059e9602,Namespace:calico-system,Attempt:0,} returns sandbox id \"aaacdb5ed2de52617034ddd4feb486f6f08dffdc9c34c0e6f988a425f1b9d2bf\"" Sep 5 23:52:00.758944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581307480.mount: Deactivated successfully. 
Sep 5 23:52:00.840246 containerd[1476]: time="2025-09-05T23:52:00.839281964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:00.841988 containerd[1476]: time="2025-09-05T23:52:00.841946731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 5 23:52:00.843747 containerd[1476]: time="2025-09-05T23:52:00.843702415Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:00.850194 containerd[1476]: time="2025-09-05T23:52:00.849929872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:00.851881 containerd[1476]: time="2025-09-05T23:52:00.851441316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.403202575s" Sep 5 23:52:00.851881 containerd[1476]: time="2025-09-05T23:52:00.851516116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 23:52:00.853693 containerd[1476]: time="2025-09-05T23:52:00.853595441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 23:52:00.856094 containerd[1476]: time="2025-09-05T23:52:00.856043648Z" level=info msg="CreateContainer within sandbox \"89717b886e53f85f7084cfe4e0b9d3b1fd22d93dad91f328fbbd7d8a0a715aa6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 23:52:00.877288 containerd[1476]: time="2025-09-05T23:52:00.877195423Z" level=info msg="CreateContainer within sandbox \"89717b886e53f85f7084cfe4e0b9d3b1fd22d93dad91f328fbbd7d8a0a715aa6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fd54b5c626575af9287fda2efce5bb106f1dd9d0f3dbd21d0e8c8ae3cb9109d9\"" Sep 5 23:52:00.879435 containerd[1476]: time="2025-09-05T23:52:00.879361749Z" level=info msg="StartContainer for \"fd54b5c626575af9287fda2efce5bb106f1dd9d0f3dbd21d0e8c8ae3cb9109d9\"" Sep 5 23:52:00.920397 systemd[1]: Started cri-containerd-fd54b5c626575af9287fda2efce5bb106f1dd9d0f3dbd21d0e8c8ae3cb9109d9.scope - libcontainer container fd54b5c626575af9287fda2efce5bb106f1dd9d0f3dbd21d0e8c8ae3cb9109d9. Sep 5 23:52:00.966132 containerd[1476]: time="2025-09-05T23:52:00.966016537Z" level=info msg="StartContainer for \"fd54b5c626575af9287fda2efce5bb106f1dd9d0f3dbd21d0e8c8ae3cb9109d9\" returns successfully" Sep 5 23:52:00.981743 systemd[1]: cri-containerd-fd54b5c626575af9287fda2efce5bb106f1dd9d0f3dbd21d0e8c8ae3cb9109d9.scope: Deactivated successfully. Sep 5 23:52:01.013077 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd54b5c626575af9287fda2efce5bb106f1dd9d0f3dbd21d0e8c8ae3cb9109d9-rootfs.mount: Deactivated successfully. 
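
The pull timing above ("in 1.403202575s") is a Go time.Duration rendered with its default formatting, as are the later typha and cni pull timings, so the logged values can be parsed back directly; a small sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Durations copied verbatim from the Pulled-image entries in this log.
	for _, s := range []string{"1.403202575s", "1.791582551s", "2.415793757s"} {
		d, err := time.ParseDuration(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s = %d ms\n", s, d.Milliseconds())
	}
}
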
Sep 5 23:52:01.055584 kubelet[2566]: E0905 23:52:01.055078 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tlkz" podUID="7f45c648-2314-4d67-9e44-04067f334c76" Sep 5 23:52:01.064333 containerd[1476]: time="2025-09-05T23:52:01.063591630Z" level=info msg="shim disconnected" id=fd54b5c626575af9287fda2efce5bb106f1dd9d0f3dbd21d0e8c8ae3cb9109d9 namespace=k8s.io Sep 5 23:52:01.064333 containerd[1476]: time="2025-09-05T23:52:01.063933230Z" level=warning msg="cleaning up after shim disconnected" id=fd54b5c626575af9287fda2efce5bb106f1dd9d0f3dbd21d0e8c8ae3cb9109d9 namespace=k8s.io Sep 5 23:52:01.064333 containerd[1476]: time="2025-09-05T23:52:01.063945150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:52:02.639217 containerd[1476]: time="2025-09-05T23:52:02.638446935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:02.640411 containerd[1476]: time="2025-09-05T23:52:02.640354580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=31736396" Sep 5 23:52:02.642261 containerd[1476]: time="2025-09-05T23:52:02.642190104Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:02.644702 containerd[1476]: time="2025-09-05T23:52:02.644626990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:02.645416 containerd[1476]: time="2025-09-05T23:52:02.645244952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.791582551s" Sep 5 23:52:02.645416 containerd[1476]: time="2025-09-05T23:52:02.645282912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 5 23:52:02.647139 containerd[1476]: time="2025-09-05T23:52:02.646892556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 23:52:02.662703 containerd[1476]: time="2025-09-05T23:52:02.662655835Z" level=info msg="CreateContainer within sandbox \"aaacdb5ed2de52617034ddd4feb486f6f08dffdc9c34c0e6f988a425f1b9d2bf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 23:52:02.692707 containerd[1476]: time="2025-09-05T23:52:02.692517910Z" level=info msg="CreateContainer within sandbox \"aaacdb5ed2de52617034ddd4feb486f6f08dffdc9c34c0e6f988a425f1b9d2bf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6a1caa62b0258b6ed2fbc953fad14eda3a468e75f3ee24a8c1877fe54b348f8f\"" Sep 5 23:52:02.696401 containerd[1476]: time="2025-09-05T23:52:02.695890598Z" level=info msg="StartContainer for \"6a1caa62b0258b6ed2fbc953fad14eda3a468e75f3ee24a8c1877fe54b348f8f\"" Sep 5 23:52:02.729838 systemd[1]: Started 
cri-containerd-6a1caa62b0258b6ed2fbc953fad14eda3a468e75f3ee24a8c1877fe54b348f8f.scope - libcontainer container 6a1caa62b0258b6ed2fbc953fad14eda3a468e75f3ee24a8c1877fe54b348f8f. Sep 5 23:52:02.769905 containerd[1476]: time="2025-09-05T23:52:02.769776102Z" level=info msg="StartContainer for \"6a1caa62b0258b6ed2fbc953fad14eda3a468e75f3ee24a8c1877fe54b348f8f\" returns successfully" Sep 5 23:52:03.047225 kubelet[2566]: E0905 23:52:03.046814 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tlkz" podUID="7f45c648-2314-4d67-9e44-04067f334c76" Sep 5 23:52:04.204804 kubelet[2566]: I0905 23:52:04.204490 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-689bc7bc6f-tsk9z" podStartSLOduration=3.07162726 podStartE2EDuration="6.204470177s" podCreationTimestamp="2025-09-05 23:51:58 +0000 UTC" firstStartedPulling="2025-09-05 23:51:59.513725518 +0000 UTC m=+30.606976912" lastFinishedPulling="2025-09-05 23:52:02.646568355 +0000 UTC m=+33.739819829" observedRunningTime="2025-09-05 23:52:03.237069169 +0000 UTC m=+34.330320563" watchObservedRunningTime="2025-09-05 23:52:04.204470177 +0000 UTC m=+35.297721571" Sep 5 23:52:05.049078 kubelet[2566]: E0905 23:52:05.047237 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tlkz" podUID="7f45c648-2314-4d67-9e44-04067f334c76" Sep 5 23:52:05.050398 containerd[1476]: time="2025-09-05T23:52:05.050346045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:05.056290 containerd[1476]: time="2025-09-05T23:52:05.055503417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 23:52:05.057816 containerd[1476]: time="2025-09-05T23:52:05.057403101Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:05.061703 containerd[1476]: time="2025-09-05T23:52:05.061633311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:05.062859 containerd[1476]: time="2025-09-05T23:52:05.062727473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.415793757s" Sep 5 23:52:05.062859 containerd[1476]: time="2025-09-05T23:52:05.062784833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
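
The "Observed pod startup duration" entry for calico-typha above checks out arithmetically: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets). This reconstruction is inferred from the logged values, not lifted from kubelet's pod_startup_latency_tracker source:

package main

import "fmt"

func main() {
	// watchObservedRunningTime 23:52:04.204470177 minus podCreationTimestamp
	// 23:51:58, both expressed as seconds past 23:51:00.
	e2e := 64.204470177 - 58.0
	// Image-pull window from the monotonic m=+ offsets in the same entry.
	pull := 33.739819829 - 30.606976912 // lastFinishedPulling - firstStartedPulling
	fmt.Printf("podStartE2EDuration=%.9fs\n", e2e)     // 6.204470177s, as logged
	fmt.Printf("podStartSLOduration=%.8f\n", e2e-pull) // 3.07162726, as logged
}

Note the log itself hints at the units: the SLO figure is a bare float64 of seconds, while the E2E figure is a quoted duration string.
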
\"89717b886e53f85f7084cfe4e0b9d3b1fd22d93dad91f328fbbd7d8a0a715aa6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 23:52:05.084261 containerd[1476]: time="2025-09-05T23:52:05.084206362Z" level=info msg="CreateContainer within sandbox \"89717b886e53f85f7084cfe4e0b9d3b1fd22d93dad91f328fbbd7d8a0a715aa6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6b8c3562d8237be959d94edf1130149006a696fd3f25fe243ff6304c2c9a4484\"" Sep 5 23:52:05.085345 containerd[1476]: time="2025-09-05T23:52:05.085313445Z" level=info msg="StartContainer for \"6b8c3562d8237be959d94edf1130149006a696fd3f25fe243ff6304c2c9a4484\"" Sep 5 23:52:05.127803 systemd[1]: Started cri-containerd-6b8c3562d8237be959d94edf1130149006a696fd3f25fe243ff6304c2c9a4484.scope - libcontainer container 6b8c3562d8237be959d94edf1130149006a696fd3f25fe243ff6304c2c9a4484. Sep 5 23:52:05.164684 containerd[1476]: time="2025-09-05T23:52:05.164477906Z" level=info msg="StartContainer for \"6b8c3562d8237be959d94edf1130149006a696fd3f25fe243ff6304c2c9a4484\" returns successfully" Sep 5 23:52:05.644662 containerd[1476]: time="2025-09-05T23:52:05.644518326Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:52:05.648299 systemd[1]: cri-containerd-6b8c3562d8237be959d94edf1130149006a696fd3f25fe243ff6304c2c9a4484.scope: Deactivated successfully. Sep 5 23:52:05.673802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b8c3562d8237be959d94edf1130149006a696fd3f25fe243ff6304c2c9a4484-rootfs.mount: Deactivated successfully. Sep 5 23:52:05.702090 kubelet[2566]: I0905 23:52:05.699973 2566 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 23:52:05.743096 containerd[1476]: time="2025-09-05T23:52:05.743026191Z" level=info msg="shim disconnected" id=6b8c3562d8237be959d94edf1130149006a696fd3f25fe243ff6304c2c9a4484 namespace=k8s.io Sep 5 23:52:05.743096 containerd[1476]: time="2025-09-05T23:52:05.743100391Z" level=warning msg="cleaning up after shim disconnected" id=6b8c3562d8237be959d94edf1130149006a696fd3f25fe243ff6304c2c9a4484 namespace=k8s.io Sep 5 23:52:05.743096 containerd[1476]: time="2025-09-05T23:52:05.743110351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:52:05.771213 systemd[1]: Created slice kubepods-burstable-pod79bda304_0fec_4e72_8abd_1ec79680ee8b.slice - libcontainer container kubepods-burstable-pod79bda304_0fec_4e72_8abd_1ec79680ee8b.slice. 
Sep 5 23:52:05.796721 containerd[1476]: time="2025-09-05T23:52:05.796656634Z" level=warning msg="cleanup warnings time=\"2025-09-05T23:52:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 5 23:52:05.804155 kubelet[2566]: I0905 23:52:05.804095 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/410fb30a-cedd-4b7f-8771-5ff3acc63254-whisker-ca-bundle\") pod \"whisker-6c798c5dc4-n4jj8\" (UID: \"410fb30a-cedd-4b7f-8771-5ff3acc63254\") " pod="calico-system/whisker-6c798c5dc4-n4jj8" Sep 5 23:52:05.804155 kubelet[2566]: I0905 23:52:05.804152 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5-calico-apiserver-certs\") pod \"calico-apiserver-6fdcb9f565-lqnhs\" (UID: \"ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5\") " pod="calico-apiserver/calico-apiserver-6fdcb9f565-lqnhs" Sep 5 23:52:05.804322 kubelet[2566]: I0905 23:52:05.804178 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltzhb\" (UniqueName: \"kubernetes.io/projected/4e4cbde1-da6c-4658-8a40-e74e4e58e9f5-kube-api-access-ltzhb\") pod \"coredns-668d6bf9bc-jphsd\" (UID: \"4e4cbde1-da6c-4658-8a40-e74e4e58e9f5\") " pod="kube-system/coredns-668d6bf9bc-jphsd" Sep 5 23:52:05.804322 kubelet[2566]: I0905 23:52:05.804200 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nprvv\" (UniqueName: \"kubernetes.io/projected/79bda304-0fec-4e72-8abd-1ec79680ee8b-kube-api-access-nprvv\") pod \"coredns-668d6bf9bc-xstw4\" (UID: \"79bda304-0fec-4e72-8abd-1ec79680ee8b\") " pod="kube-system/coredns-668d6bf9bc-xstw4" Sep 5 23:52:05.804322 kubelet[2566]: I0905 23:52:05.804221 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q84xb\" (UniqueName: \"kubernetes.io/projected/410fb30a-cedd-4b7f-8771-5ff3acc63254-kube-api-access-q84xb\") pod \"whisker-6c798c5dc4-n4jj8\" (UID: \"410fb30a-cedd-4b7f-8771-5ff3acc63254\") " pod="calico-system/whisker-6c798c5dc4-n4jj8" Sep 5 23:52:05.804322 kubelet[2566]: I0905 23:52:05.804248 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4cbde1-da6c-4658-8a40-e74e4e58e9f5-config-volume\") pod \"coredns-668d6bf9bc-jphsd\" (UID: \"4e4cbde1-da6c-4658-8a40-e74e4e58e9f5\") " pod="kube-system/coredns-668d6bf9bc-jphsd" Sep 5 23:52:05.804322 kubelet[2566]: I0905 23:52:05.804272 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ac5769-ff1d-4260-a1f9-1ac84156590f-tigera-ca-bundle\") pod \"calico-kube-controllers-68d58495b-ch6j7\" (UID: \"08ac5769-ff1d-4260-a1f9-1ac84156590f\") " pod="calico-system/calico-kube-controllers-68d58495b-ch6j7" Sep 5 23:52:05.804444 kubelet[2566]: I0905 23:52:05.804294 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn86z\" (UniqueName: \"kubernetes.io/projected/08ac5769-ff1d-4260-a1f9-1ac84156590f-kube-api-access-pn86z\") pod \"calico-kube-controllers-68d58495b-ch6j7\" 
(UID: \"08ac5769-ff1d-4260-a1f9-1ac84156590f\") " pod="calico-system/calico-kube-controllers-68d58495b-ch6j7" Sep 5 23:52:05.804444 kubelet[2566]: I0905 23:52:05.804319 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr92x\" (UniqueName: \"kubernetes.io/projected/c8fb7313-1f3f-4238-86b4-1214e62f55c2-kube-api-access-dr92x\") pod \"calico-apiserver-6fdcb9f565-bpnlv\" (UID: \"c8fb7313-1f3f-4238-86b4-1214e62f55c2\") " pod="calico-apiserver/calico-apiserver-6fdcb9f565-bpnlv" Sep 5 23:52:05.804444 kubelet[2566]: I0905 23:52:05.804338 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/410fb30a-cedd-4b7f-8771-5ff3acc63254-whisker-backend-key-pair\") pod \"whisker-6c798c5dc4-n4jj8\" (UID: \"410fb30a-cedd-4b7f-8771-5ff3acc63254\") " pod="calico-system/whisker-6c798c5dc4-n4jj8" Sep 5 23:52:05.804444 kubelet[2566]: I0905 23:52:05.804356 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c8fb7313-1f3f-4238-86b4-1214e62f55c2-calico-apiserver-certs\") pod \"calico-apiserver-6fdcb9f565-bpnlv\" (UID: \"c8fb7313-1f3f-4238-86b4-1214e62f55c2\") " pod="calico-apiserver/calico-apiserver-6fdcb9f565-bpnlv" Sep 5 23:52:05.804444 kubelet[2566]: I0905 23:52:05.804380 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79bda304-0fec-4e72-8abd-1ec79680ee8b-config-volume\") pod \"coredns-668d6bf9bc-xstw4\" (UID: \"79bda304-0fec-4e72-8abd-1ec79680ee8b\") " pod="kube-system/coredns-668d6bf9bc-xstw4" Sep 5 23:52:05.805485 kubelet[2566]: I0905 23:52:05.804399 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjspc\" (UniqueName: \"kubernetes.io/projected/ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5-kube-api-access-kjspc\") pod \"calico-apiserver-6fdcb9f565-lqnhs\" (UID: \"ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5\") " pod="calico-apiserver/calico-apiserver-6fdcb9f565-lqnhs" Sep 5 23:52:05.813140 systemd[1]: Created slice kubepods-besteffort-pod08ac5769_ff1d_4260_a1f9_1ac84156590f.slice - libcontainer container kubepods-besteffort-pod08ac5769_ff1d_4260_a1f9_1ac84156590f.slice. Sep 5 23:52:05.828932 systemd[1]: Created slice kubepods-besteffort-podea42d30b_f6e9_471f_b9c8_2a775dbbc9d5.slice - libcontainer container kubepods-besteffort-podea42d30b_f6e9_471f_b9c8_2a775dbbc9d5.slice. Sep 5 23:52:05.838757 systemd[1]: Created slice kubepods-burstable-pod4e4cbde1_da6c_4658_8a40_e74e4e58e9f5.slice - libcontainer container kubepods-burstable-pod4e4cbde1_da6c_4658_8a40_e74e4e58e9f5.slice. Sep 5 23:52:05.846796 systemd[1]: Created slice kubepods-besteffort-pod410fb30a_cedd_4b7f_8771_5ff3acc63254.slice - libcontainer container kubepods-besteffort-pod410fb30a_cedd_4b7f_8771_5ff3acc63254.slice. Sep 5 23:52:05.857406 systemd[1]: Created slice kubepods-besteffort-podc8fb7313_1f3f_4238_86b4_1214e62f55c2.slice - libcontainer container kubepods-besteffort-podc8fb7313_1f3f_4238_86b4_1214e62f55c2.slice. Sep 5 23:52:05.866328 systemd[1]: Created slice kubepods-besteffort-pod42e44fef_e5ea_4ffe_8404_404c1a1ddfc1.slice - libcontainer container kubepods-besteffort-pod42e44fef_e5ea_4ffe_8404_404c1a1ddfc1.slice. 
Sep 5 23:52:05.905693 kubelet[2566]: I0905 23:52:05.905404 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l2dj\" (UniqueName: \"kubernetes.io/projected/42e44fef-e5ea-4ffe-8404-404c1a1ddfc1-kube-api-access-4l2dj\") pod \"goldmane-54d579b49d-wbmmx\" (UID: \"42e44fef-e5ea-4ffe-8404-404c1a1ddfc1\") " pod="calico-system/goldmane-54d579b49d-wbmmx" Sep 5 23:52:05.905693 kubelet[2566]: I0905 23:52:05.905480 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42e44fef-e5ea-4ffe-8404-404c1a1ddfc1-config\") pod \"goldmane-54d579b49d-wbmmx\" (UID: \"42e44fef-e5ea-4ffe-8404-404c1a1ddfc1\") " pod="calico-system/goldmane-54d579b49d-wbmmx" Sep 5 23:52:05.906332 kubelet[2566]: I0905 23:52:05.906110 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42e44fef-e5ea-4ffe-8404-404c1a1ddfc1-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-wbmmx\" (UID: \"42e44fef-e5ea-4ffe-8404-404c1a1ddfc1\") " pod="calico-system/goldmane-54d579b49d-wbmmx" Sep 5 23:52:05.907183 kubelet[2566]: I0905 23:52:05.907036 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/42e44fef-e5ea-4ffe-8404-404c1a1ddfc1-goldmane-key-pair\") pod \"goldmane-54d579b49d-wbmmx\" (UID: \"42e44fef-e5ea-4ffe-8404-404c1a1ddfc1\") " pod="calico-system/goldmane-54d579b49d-wbmmx" Sep 5 23:52:06.096328 containerd[1476]: time="2025-09-05T23:52:06.096282434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xstw4,Uid:79bda304-0fec-4e72-8abd-1ec79680ee8b,Namespace:kube-system,Attempt:0,}" Sep 5 23:52:06.123402 containerd[1476]: time="2025-09-05T23:52:06.123353094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58495b-ch6j7,Uid:08ac5769-ff1d-4260-a1f9-1ac84156590f,Namespace:calico-system,Attempt:0,}" Sep 5 23:52:06.137388 containerd[1476]: time="2025-09-05T23:52:06.137346766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fdcb9f565-lqnhs,Uid:ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:52:06.151917 containerd[1476]: time="2025-09-05T23:52:06.151671078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jphsd,Uid:4e4cbde1-da6c-4658-8a40-e74e4e58e9f5,Namespace:kube-system,Attempt:0,}" Sep 5 23:52:06.152745 containerd[1476]: time="2025-09-05T23:52:06.152706640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c798c5dc4-n4jj8,Uid:410fb30a-cedd-4b7f-8771-5ff3acc63254,Namespace:calico-system,Attempt:0,}" Sep 5 23:52:06.178891 containerd[1476]: time="2025-09-05T23:52:06.178758258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wbmmx,Uid:42e44fef-e5ea-4ffe-8404-404c1a1ddfc1,Namespace:calico-system,Attempt:0,}" Sep 5 23:52:06.180046 containerd[1476]: time="2025-09-05T23:52:06.179746260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fdcb9f565-bpnlv,Uid:c8fb7313-1f3f-4238-86b4-1214e62f55c2,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:52:06.201589 containerd[1476]: time="2025-09-05T23:52:06.201507669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 23:52:06.259420 containerd[1476]: 
time="2025-09-05T23:52:06.259373317Z" level=error msg="Failed to destroy network for sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.263556 containerd[1476]: time="2025-09-05T23:52:06.263449167Z" level=error msg="encountered an error cleaning up failed sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.263556 containerd[1476]: time="2025-09-05T23:52:06.263531127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58495b-ch6j7,Uid:08ac5769-ff1d-4260-a1f9-1ac84156590f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.264522 kubelet[2566]: E0905 23:52:06.264469 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.264677 kubelet[2566]: E0905 23:52:06.264559 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58495b-ch6j7" Sep 5 23:52:06.264677 kubelet[2566]: E0905 23:52:06.264580 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58495b-ch6j7" Sep 5 23:52:06.264677 kubelet[2566]: E0905 23:52:06.264642 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d58495b-ch6j7_calico-system(08ac5769-ff1d-4260-a1f9-1ac84156590f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d58495b-ch6j7_calico-system(08ac5769-ff1d-4260-a1f9-1ac84156590f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-68d58495b-ch6j7" podUID="08ac5769-ff1d-4260-a1f9-1ac84156590f" Sep 5 23:52:06.292077 containerd[1476]: time="2025-09-05T23:52:06.292028710Z" level=error msg="Failed to destroy network for sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.292689 containerd[1476]: time="2025-09-05T23:52:06.292656672Z" level=error msg="encountered an error cleaning up failed sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.293905 containerd[1476]: time="2025-09-05T23:52:06.293498593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xstw4,Uid:79bda304-0fec-4e72-8abd-1ec79680ee8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.294570 kubelet[2566]: E0905 23:52:06.294184 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.294570 kubelet[2566]: E0905 23:52:06.294243 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xstw4" Sep 5 23:52:06.294570 kubelet[2566]: E0905 23:52:06.294263 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xstw4" Sep 5 23:52:06.295057 kubelet[2566]: E0905 23:52:06.294305 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xstw4_kube-system(79bda304-0fec-4e72-8abd-1ec79680ee8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xstw4_kube-system(79bda304-0fec-4e72-8abd-1ec79680ee8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xstw4" podUID="79bda304-0fec-4e72-8abd-1ec79680ee8b" Sep 5 23:52:06.350164 containerd[1476]: time="2025-09-05T23:52:06.350016839Z" level=error msg="Failed to destroy network for sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.353831 containerd[1476]: time="2025-09-05T23:52:06.353776728Z" level=error msg="encountered an error cleaning up failed sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.354097 containerd[1476]: time="2025-09-05T23:52:06.353982768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fdcb9f565-lqnhs,Uid:ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.354391 kubelet[2566]: E0905 23:52:06.354241 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.354391 kubelet[2566]: E0905 23:52:06.354300 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fdcb9f565-lqnhs" Sep 5 23:52:06.354391 kubelet[2566]: E0905 23:52:06.354323 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fdcb9f565-lqnhs" Sep 5 23:52:06.354563 kubelet[2566]: E0905 23:52:06.354373 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fdcb9f565-lqnhs_calico-apiserver(ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fdcb9f565-lqnhs_calico-apiserver(ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fdcb9f565-lqnhs" podUID="ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5" Sep 5 23:52:06.374453 containerd[1476]: time="2025-09-05T23:52:06.374408334Z" level=error msg="Failed to destroy network for sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.375560 containerd[1476]: time="2025-09-05T23:52:06.375283096Z" level=error msg="encountered an error cleaning up failed sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.375560 containerd[1476]: time="2025-09-05T23:52:06.375341056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jphsd,Uid:4e4cbde1-da6c-4658-8a40-e74e4e58e9f5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.376045 kubelet[2566]: E0905 23:52:06.375567 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.376045 kubelet[2566]: E0905 23:52:06.375650 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jphsd" Sep 5 23:52:06.376045 kubelet[2566]: E0905 23:52:06.375668 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jphsd" Sep 5 23:52:06.376260 kubelet[2566]: E0905 23:52:06.375849 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jphsd_kube-system(4e4cbde1-da6c-4658-8a40-e74e4e58e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jphsd_kube-system(4e4cbde1-da6c-4658-8a40-e74e4e58e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jphsd" podUID="4e4cbde1-da6c-4658-8a40-e74e4e58e9f5" Sep 5 23:52:06.396751 containerd[1476]: time="2025-09-05T23:52:06.396703143Z" level=error msg="Failed to destroy network for sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.397310 containerd[1476]: time="2025-09-05T23:52:06.397171664Z" level=error msg="encountered an error cleaning up failed sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.397310 containerd[1476]: time="2025-09-05T23:52:06.397221185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c798c5dc4-n4jj8,Uid:410fb30a-cedd-4b7f-8771-5ff3acc63254,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.397991 kubelet[2566]: E0905 23:52:06.397622 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.397991 kubelet[2566]: E0905 23:52:06.397677 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c798c5dc4-n4jj8" Sep 5 23:52:06.397991 kubelet[2566]: E0905 23:52:06.397703 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c798c5dc4-n4jj8" Sep 5 23:52:06.398158 kubelet[2566]: E0905 23:52:06.397741 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c798c5dc4-n4jj8_calico-system(410fb30a-cedd-4b7f-8771-5ff3acc63254)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c798c5dc4-n4jj8_calico-system(410fb30a-cedd-4b7f-8771-5ff3acc63254)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c798c5dc4-n4jj8" podUID="410fb30a-cedd-4b7f-8771-5ff3acc63254" Sep 5 23:52:06.415087 containerd[1476]: time="2025-09-05T23:52:06.414993984Z" level=error msg="Failed to destroy network for sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.415696 containerd[1476]: time="2025-09-05T23:52:06.415528905Z" level=error msg="encountered an error cleaning up failed sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.415696 containerd[1476]: time="2025-09-05T23:52:06.415636066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wbmmx,Uid:42e44fef-e5ea-4ffe-8404-404c1a1ddfc1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.416377 kubelet[2566]: E0905 23:52:06.416009 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.416377 kubelet[2566]: E0905 23:52:06.416062 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-wbmmx" Sep 5 23:52:06.416377 kubelet[2566]: E0905 23:52:06.416081 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-wbmmx" Sep 5 23:52:06.416588 kubelet[2566]: E0905 23:52:06.416134 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-wbmmx_calico-system(42e44fef-e5ea-4ffe-8404-404c1a1ddfc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-54d579b49d-wbmmx_calico-system(42e44fef-e5ea-4ffe-8404-404c1a1ddfc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-wbmmx" podUID="42e44fef-e5ea-4ffe-8404-404c1a1ddfc1" Sep 5 23:52:06.416936 containerd[1476]: time="2025-09-05T23:52:06.416834948Z" level=error msg="Failed to destroy network for sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.417219 containerd[1476]: time="2025-09-05T23:52:06.417193589Z" level=error msg="encountered an error cleaning up failed sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.417388 containerd[1476]: time="2025-09-05T23:52:06.417304469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fdcb9f565-bpnlv,Uid:c8fb7313-1f3f-4238-86b4-1214e62f55c2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.417842 kubelet[2566]: E0905 23:52:06.417684 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:06.417842 kubelet[2566]: E0905 23:52:06.417738 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fdcb9f565-bpnlv" Sep 5 23:52:06.417842 kubelet[2566]: E0905 23:52:06.417757 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fdcb9f565-bpnlv" Sep 5 23:52:06.417963 kubelet[2566]: E0905 23:52:06.417804 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6fdcb9f565-bpnlv_calico-apiserver(c8fb7313-1f3f-4238-86b4-1214e62f55c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fdcb9f565-bpnlv_calico-apiserver(c8fb7313-1f3f-4238-86b4-1214e62f55c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fdcb9f565-bpnlv" podUID="c8fb7313-1f3f-4238-86b4-1214e62f55c2" Sep 5 23:52:07.058747 systemd[1]: Created slice kubepods-besteffort-pod7f45c648_2314_4d67_9e44_04067f334c76.slice - libcontainer container kubepods-besteffort-pod7f45c648_2314_4d67_9e44_04067f334c76.slice. Sep 5 23:52:07.062439 containerd[1476]: time="2025-09-05T23:52:07.062368223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tlkz,Uid:7f45c648-2314-4d67-9e44-04067f334c76,Namespace:calico-system,Attempt:0,}" Sep 5 23:52:07.084271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178-shm.mount: Deactivated successfully. Sep 5 23:52:07.084658 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6-shm.mount: Deactivated successfully. Sep 5 23:52:07.128975 containerd[1476]: time="2025-09-05T23:52:07.128892767Z" level=error msg="Failed to destroy network for sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.131642 containerd[1476]: time="2025-09-05T23:52:07.129677369Z" level=error msg="encountered an error cleaning up failed sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.131642 containerd[1476]: time="2025-09-05T23:52:07.129745489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tlkz,Uid:7f45c648-2314-4d67-9e44-04067f334c76,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.131800 kubelet[2566]: E0905 23:52:07.129969 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.131800 kubelet[2566]: E0905 23:52:07.130022 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6tlkz" Sep 5 23:52:07.131800 kubelet[2566]: E0905 23:52:07.130045 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6tlkz" Sep 5 23:52:07.132101 kubelet[2566]: E0905 23:52:07.130080 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6tlkz_calico-system(7f45c648-2314-4d67-9e44-04067f334c76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6tlkz_calico-system(7f45c648-2314-4d67-9e44-04067f334c76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6tlkz" podUID="7f45c648-2314-4d67-9e44-04067f334c76" Sep 5 23:52:07.134858 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2-shm.mount: Deactivated successfully. Sep 5 23:52:07.200670 kubelet[2566]: I0905 23:52:07.199952 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:07.202472 containerd[1476]: time="2025-09-05T23:52:07.201980245Z" level=info msg="StopPodSandbox for \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\"" Sep 5 23:52:07.202472 containerd[1476]: time="2025-09-05T23:52:07.202172566Z" level=info msg="Ensure that sandbox a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2 in task-service has been cleanup successfully" Sep 5 23:52:07.204485 kubelet[2566]: I0905 23:52:07.204361 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Sep 5 23:52:07.207155 containerd[1476]: time="2025-09-05T23:52:07.206366495Z" level=info msg="StopPodSandbox for \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\"" Sep 5 23:52:07.207155 containerd[1476]: time="2025-09-05T23:52:07.206945096Z" level=info msg="Ensure that sandbox 0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11 in task-service has been cleanup successfully" Sep 5 23:52:07.207382 kubelet[2566]: I0905 23:52:07.206845 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Sep 5 23:52:07.207773 containerd[1476]: time="2025-09-05T23:52:07.207747538Z" level=info msg="StopPodSandbox for \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\"" Sep 5 23:52:07.208340 containerd[1476]: time="2025-09-05T23:52:07.208315059Z" level=info msg="Ensure that sandbox 
3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e in task-service has been cleanup successfully" Sep 5 23:52:07.210823 kubelet[2566]: I0905 23:52:07.210476 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:07.212497 containerd[1476]: time="2025-09-05T23:52:07.212463308Z" level=info msg="StopPodSandbox for \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\"" Sep 5 23:52:07.213233 containerd[1476]: time="2025-09-05T23:52:07.213031709Z" level=info msg="Ensure that sandbox ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178 in task-service has been cleanup successfully" Sep 5 23:52:07.216510 kubelet[2566]: I0905 23:52:07.216476 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:07.218063 containerd[1476]: time="2025-09-05T23:52:07.218028240Z" level=info msg="StopPodSandbox for \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\"" Sep 5 23:52:07.219459 containerd[1476]: time="2025-09-05T23:52:07.219227083Z" level=info msg="Ensure that sandbox 7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001 in task-service has been cleanup successfully" Sep 5 23:52:07.220857 kubelet[2566]: I0905 23:52:07.220820 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:07.225135 containerd[1476]: time="2025-09-05T23:52:07.224353854Z" level=info msg="StopPodSandbox for \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\"" Sep 5 23:52:07.228867 containerd[1476]: time="2025-09-05T23:52:07.227722101Z" level=info msg="Ensure that sandbox 426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53 in task-service has been cleanup successfully" Sep 5 23:52:07.229841 kubelet[2566]: I0905 23:52:07.229799 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:07.235235 containerd[1476]: time="2025-09-05T23:52:07.235124117Z" level=info msg="StopPodSandbox for \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\"" Sep 5 23:52:07.237052 containerd[1476]: time="2025-09-05T23:52:07.236906441Z" level=info msg="Ensure that sandbox ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670 in task-service has been cleanup successfully" Sep 5 23:52:07.240160 kubelet[2566]: I0905 23:52:07.240086 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Sep 5 23:52:07.247639 containerd[1476]: time="2025-09-05T23:52:07.247438224Z" level=info msg="StopPodSandbox for \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\"" Sep 5 23:52:07.249142 containerd[1476]: time="2025-09-05T23:52:07.248848627Z" level=info msg="Ensure that sandbox 68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6 in task-service has been cleanup successfully" Sep 5 23:52:07.347214 containerd[1476]: time="2025-09-05T23:52:07.347063320Z" level=error msg="StopPodSandbox for \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\" failed" error="failed to destroy network for sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.347963 kubelet[2566]: E0905 23:52:07.347330 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Sep 5 23:52:07.347963 kubelet[2566]: E0905 23:52:07.347404 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11"} Sep 5 23:52:07.347963 kubelet[2566]: E0905 23:52:07.347469 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e4cbde1-da6c-4658-8a40-e74e4e58e9f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:52:07.347963 kubelet[2566]: E0905 23:52:07.347489 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e4cbde1-da6c-4658-8a40-e74e4e58e9f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jphsd" podUID="4e4cbde1-da6c-4658-8a40-e74e4e58e9f5" Sep 5 23:52:07.354409 containerd[1476]: time="2025-09-05T23:52:07.353852135Z" level=error msg="StopPodSandbox for \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\" failed" error="failed to destroy network for sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.356365 containerd[1476]: time="2025-09-05T23:52:07.356299620Z" level=error msg="StopPodSandbox for \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\" failed" error="failed to destroy network for sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.357376 kubelet[2566]: E0905 23:52:07.357331 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:07.357482 kubelet[2566]: E0905 23:52:07.357385 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001"} Sep 5 23:52:07.357482 kubelet[2566]: E0905 23:52:07.357425 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"410fb30a-cedd-4b7f-8771-5ff3acc63254\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:52:07.357482 kubelet[2566]: E0905 23:52:07.357445 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"410fb30a-cedd-4b7f-8771-5ff3acc63254\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c798c5dc4-n4jj8" podUID="410fb30a-cedd-4b7f-8771-5ff3acc63254" Sep 5 23:52:07.357482 kubelet[2566]: E0905 23:52:07.357331 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:07.357482 kubelet[2566]: E0905 23:52:07.357472 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2"} Sep 5 23:52:07.357753 kubelet[2566]: E0905 23:52:07.357489 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f45c648-2314-4d67-9e44-04067f334c76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:52:07.357753 kubelet[2566]: E0905 23:52:07.357503 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f45c648-2314-4d67-9e44-04067f334c76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6tlkz" podUID="7f45c648-2314-4d67-9e44-04067f334c76" Sep 5 23:52:07.358453 containerd[1476]: time="2025-09-05T23:52:07.358406664Z" level=error 
msg="StopPodSandbox for \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\" failed" error="failed to destroy network for sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.358801 containerd[1476]: time="2025-09-05T23:52:07.358729665Z" level=error msg="StopPodSandbox for \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\" failed" error="failed to destroy network for sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.359184 kubelet[2566]: E0905 23:52:07.358913 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Sep 5 23:52:07.359184 kubelet[2566]: E0905 23:52:07.358969 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e"} Sep 5 23:52:07.359184 kubelet[2566]: E0905 23:52:07.358913 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:07.359184 kubelet[2566]: E0905 23:52:07.359023 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178"} Sep 5 23:52:07.359184 kubelet[2566]: E0905 23:52:07.359044 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08ac5769-ff1d-4260-a1f9-1ac84156590f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:52:07.359374 kubelet[2566]: E0905 23:52:07.359078 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08ac5769-ff1d-4260-a1f9-1ac84156590f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-68d58495b-ch6j7" podUID="08ac5769-ff1d-4260-a1f9-1ac84156590f" Sep 5 23:52:07.359374 kubelet[2566]: E0905 23:52:07.359112 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:52:07.359374 kubelet[2566]: E0905 23:52:07.359130 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fdcb9f565-lqnhs" podUID="ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5" Sep 5 23:52:07.364724 containerd[1476]: time="2025-09-05T23:52:07.363637076Z" level=error msg="StopPodSandbox for \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\" failed" error="failed to destroy network for sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.364851 kubelet[2566]: E0905 23:52:07.363907 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:07.364851 kubelet[2566]: E0905 23:52:07.363956 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53"} Sep 5 23:52:07.364851 kubelet[2566]: E0905 23:52:07.363988 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42e44fef-e5ea-4ffe-8404-404c1a1ddfc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:52:07.364851 kubelet[2566]: E0905 23:52:07.364017 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42e44fef-e5ea-4ffe-8404-404c1a1ddfc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-wbmmx" podUID="42e44fef-e5ea-4ffe-8404-404c1a1ddfc1" Sep 5 23:52:07.365305 containerd[1476]: time="2025-09-05T23:52:07.365263639Z" level=error msg="StopPodSandbox for \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\" failed" error="failed to destroy network for sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.365656 kubelet[2566]: E0905 23:52:07.365459 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:07.365656 kubelet[2566]: E0905 23:52:07.365513 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670"} Sep 5 23:52:07.365656 kubelet[2566]: E0905 23:52:07.365576 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8fb7313-1f3f-4238-86b4-1214e62f55c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:52:07.365656 kubelet[2566]: E0905 23:52:07.365628 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8fb7313-1f3f-4238-86b4-1214e62f55c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fdcb9f565-bpnlv" podUID="c8fb7313-1f3f-4238-86b4-1214e62f55c2" Sep 5 23:52:07.366795 containerd[1476]: time="2025-09-05T23:52:07.366753482Z" level=error msg="StopPodSandbox for \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\" failed" error="failed to destroy network for sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:52:07.367109 kubelet[2566]: E0905 23:52:07.367047 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Sep 5 23:52:07.367109 kubelet[2566]: E0905 23:52:07.367098 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6"} Sep 5 23:52:07.367233 kubelet[2566]: E0905 23:52:07.367126 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79bda304-0fec-4e72-8abd-1ec79680ee8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:52:07.367233 kubelet[2566]: E0905 23:52:07.367145 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79bda304-0fec-4e72-8abd-1ec79680ee8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xstw4" podUID="79bda304-0fec-4e72-8abd-1ec79680ee8b" Sep 5 23:52:10.245350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673522464.mount: Deactivated successfully. Sep 5 23:52:10.274406 containerd[1476]: time="2025-09-05T23:52:10.274316885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:10.275935 containerd[1476]: time="2025-09-05T23:52:10.275876328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 23:52:10.276746 containerd[1476]: time="2025-09-05T23:52:10.276706690Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:10.279645 containerd[1476]: time="2025-09-05T23:52:10.279589736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:10.281005 containerd[1476]: time="2025-09-05T23:52:10.280959659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.07936783s" Sep 5 23:52:10.281005 containerd[1476]: time="2025-09-05T23:52:10.280998339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 23:52:10.313790 containerd[1476]: time="2025-09-05T23:52:10.313643204Z" level=info msg="CreateContainer within sandbox \"89717b886e53f85f7084cfe4e0b9d3b1fd22d93dad91f328fbbd7d8a0a715aa6\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 23:52:10.352514 containerd[1476]: time="2025-09-05T23:52:10.352456641Z" level=info msg="CreateContainer within sandbox \"89717b886e53f85f7084cfe4e0b9d3b1fd22d93dad91f328fbbd7d8a0a715aa6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a98969bb59f2211e7dba38f5ac5bb66ebe905027248faebd0dffa7409a36488a\"" Sep 5 23:52:10.354247 containerd[1476]: time="2025-09-05T23:52:10.354216405Z" level=info msg="StartContainer for \"a98969bb59f2211e7dba38f5ac5bb66ebe905027248faebd0dffa7409a36488a\"" Sep 5 23:52:10.400800 systemd[1]: Started cri-containerd-a98969bb59f2211e7dba38f5ac5bb66ebe905027248faebd0dffa7409a36488a.scope - libcontainer container a98969bb59f2211e7dba38f5ac5bb66ebe905027248faebd0dffa7409a36488a. Sep 5 23:52:10.432479 containerd[1476]: time="2025-09-05T23:52:10.432430881Z" level=info msg="StartContainer for \"a98969bb59f2211e7dba38f5ac5bb66ebe905027248faebd0dffa7409a36488a\" returns successfully" Sep 5 23:52:10.585772 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 23:52:10.585880 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 5 23:52:10.741983 containerd[1476]: time="2025-09-05T23:52:10.741632059Z" level=info msg="StopPodSandbox for \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\"" Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.849 [INFO][3731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.850 [INFO][3731] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" iface="eth0" netns="/var/run/netns/cni-7c59afe6-2377-4e8e-c5fc-297f76daa02a" Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.850 [INFO][3731] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" iface="eth0" netns="/var/run/netns/cni-7c59afe6-2377-4e8e-c5fc-297f76daa02a" Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.850 [INFO][3731] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" iface="eth0" netns="/var/run/netns/cni-7c59afe6-2377-4e8e-c5fc-297f76daa02a" Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.850 [INFO][3731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.850 [INFO][3731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.902 [INFO][3739] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" HandleID="k8s-pod-network.7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.902 [INFO][3739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.902 [INFO][3739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.917 [WARNING][3739] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" HandleID="k8s-pod-network.7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.917 [INFO][3739] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" HandleID="k8s-pod-network.7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.920 [INFO][3739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:10.927946 containerd[1476]: 2025-09-05 23:52:10.923 [INFO][3731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:10.930693 containerd[1476]: time="2025-09-05T23:52:10.930638357Z" level=info msg="TearDown network for sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\" successfully" Sep 5 23:52:10.930693 containerd[1476]: time="2025-09-05T23:52:10.930685397Z" level=info msg="StopPodSandbox for \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\" returns successfully" Sep 5 23:52:11.054335 kubelet[2566]: I0905 23:52:11.054215 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/410fb30a-cedd-4b7f-8771-5ff3acc63254-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "410fb30a-cedd-4b7f-8771-5ff3acc63254" (UID: "410fb30a-cedd-4b7f-8771-5ff3acc63254"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 23:52:11.057681 kubelet[2566]: I0905 23:52:11.057578 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/410fb30a-cedd-4b7f-8771-5ff3acc63254-whisker-ca-bundle\") pod \"410fb30a-cedd-4b7f-8771-5ff3acc63254\" (UID: \"410fb30a-cedd-4b7f-8771-5ff3acc63254\") " Sep 5 23:52:11.057681 kubelet[2566]: I0905 23:52:11.057663 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q84xb\" (UniqueName: \"kubernetes.io/projected/410fb30a-cedd-4b7f-8771-5ff3acc63254-kube-api-access-q84xb\") pod \"410fb30a-cedd-4b7f-8771-5ff3acc63254\" (UID: \"410fb30a-cedd-4b7f-8771-5ff3acc63254\") " Sep 5 23:52:11.057836 kubelet[2566]: I0905 23:52:11.057697 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/410fb30a-cedd-4b7f-8771-5ff3acc63254-whisker-backend-key-pair\") pod \"410fb30a-cedd-4b7f-8771-5ff3acc63254\" (UID: \"410fb30a-cedd-4b7f-8771-5ff3acc63254\") " Sep 5 23:52:11.057836 kubelet[2566]: I0905 23:52:11.057779 2566 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/410fb30a-cedd-4b7f-8771-5ff3acc63254-whisker-ca-bundle\") on node \"ci-4081-3-5-n-2b989ca6ad\" DevicePath \"\"" Sep 5 23:52:11.061749 kubelet[2566]: I0905 23:52:11.061371 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410fb30a-cedd-4b7f-8771-5ff3acc63254-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "410fb30a-cedd-4b7f-8771-5ff3acc63254" (UID: "410fb30a-cedd-4b7f-8771-5ff3acc63254"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 23:52:11.063526 kubelet[2566]: I0905 23:52:11.063483 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410fb30a-cedd-4b7f-8771-5ff3acc63254-kube-api-access-q84xb" (OuterVolumeSpecName: "kube-api-access-q84xb") pod "410fb30a-cedd-4b7f-8771-5ff3acc63254" (UID: "410fb30a-cedd-4b7f-8771-5ff3acc63254"). InnerVolumeSpecName "kube-api-access-q84xb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 23:52:11.158958 kubelet[2566]: I0905 23:52:11.158883 2566 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/410fb30a-cedd-4b7f-8771-5ff3acc63254-whisker-backend-key-pair\") on node \"ci-4081-3-5-n-2b989ca6ad\" DevicePath \"\"" Sep 5 23:52:11.158958 kubelet[2566]: I0905 23:52:11.158952 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q84xb\" (UniqueName: \"kubernetes.io/projected/410fb30a-cedd-4b7f-8771-5ff3acc63254-kube-api-access-q84xb\") on node \"ci-4081-3-5-n-2b989ca6ad\" DevicePath \"\"" Sep 5 23:52:11.248884 systemd[1]: run-netns-cni\x2d7c59afe6\x2d2377\x2d4e8e\x2dc5fc\x2d297f76daa02a.mount: Deactivated successfully. Sep 5 23:52:11.249113 systemd[1]: var-lib-kubelet-pods-410fb30a\x2dcedd\x2d4b7f\x2d8771\x2d5ff3acc63254-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq84xb.mount: Deactivated successfully. Sep 5 23:52:11.249861 systemd[1]: var-lib-kubelet-pods-410fb30a\x2dcedd\x2d4b7f\x2d8771\x2d5ff3acc63254-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 5 23:52:11.263743 systemd[1]: Removed slice kubepods-besteffort-pod410fb30a_cedd_4b7f_8771_5ff3acc63254.slice - libcontainer container kubepods-besteffort-pod410fb30a_cedd_4b7f_8771_5ff3acc63254.slice. Sep 5 23:52:11.297329 kubelet[2566]: I0905 23:52:11.296531 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l92rg" podStartSLOduration=2.460980667 podStartE2EDuration="13.296512232s" podCreationTimestamp="2025-09-05 23:51:58 +0000 UTC" firstStartedPulling="2025-09-05 23:51:59.446429816 +0000 UTC m=+30.539681170" lastFinishedPulling="2025-09-05 23:52:10.281961341 +0000 UTC m=+41.375212735" observedRunningTime="2025-09-05 23:52:11.287846935 +0000 UTC m=+42.381098329" watchObservedRunningTime="2025-09-05 23:52:11.296512232 +0000 UTC m=+42.389763586" Sep 5 23:52:11.376351 systemd[1]: Created slice kubepods-besteffort-pod3e691cca_93dc_4eef_9c75_63f792f0cb2b.slice - libcontainer container kubepods-besteffort-pod3e691cca_93dc_4eef_9c75_63f792f0cb2b.slice. Sep 5 23:52:11.459992 kubelet[2566]: I0905 23:52:11.459898 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e691cca-93dc-4eef-9c75-63f792f0cb2b-whisker-backend-key-pair\") pod \"whisker-5fcd84477-qvb4g\" (UID: \"3e691cca-93dc-4eef-9c75-63f792f0cb2b\") " pod="calico-system/whisker-5fcd84477-qvb4g" Sep 5 23:52:11.459992 kubelet[2566]: I0905 23:52:11.459971 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e691cca-93dc-4eef-9c75-63f792f0cb2b-whisker-ca-bundle\") pod \"whisker-5fcd84477-qvb4g\" (UID: \"3e691cca-93dc-4eef-9c75-63f792f0cb2b\") " pod="calico-system/whisker-5fcd84477-qvb4g" Sep 5 23:52:11.459992 kubelet[2566]: I0905 23:52:11.459996 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxc2z\" (UniqueName: \"kubernetes.io/projected/3e691cca-93dc-4eef-9c75-63f792f0cb2b-kube-api-access-qxc2z\") pod \"whisker-5fcd84477-qvb4g\" (UID: \"3e691cca-93dc-4eef-9c75-63f792f0cb2b\") " pod="calico-system/whisker-5fcd84477-qvb4g" Sep 5 23:52:11.681840 containerd[1476]: time="2025-09-05T23:52:11.681710821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcd84477-qvb4g,Uid:3e691cca-93dc-4eef-9c75-63f792f0cb2b,Namespace:calico-system,Attempt:0,}" Sep 5 23:52:11.838602 systemd-networkd[1376]: cali1e08aa78cff: Link UP Sep 5 23:52:11.840156 systemd-networkd[1376]: cali1e08aa78cff: Gained carrier Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.730 [INFO][3782] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.746 [INFO][3782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0 whisker-5fcd84477- calico-system 3e691cca-93dc-4eef-9c75-63f792f0cb2b 918 0 2025-09-05 23:52:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5fcd84477 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-n-2b989ca6ad whisker-5fcd84477-qvb4g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1e08aa78cff [] [] }} 
ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Namespace="calico-system" Pod="whisker-5fcd84477-qvb4g" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.746 [INFO][3782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Namespace="calico-system" Pod="whisker-5fcd84477-qvb4g" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.776 [INFO][3793] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" HandleID="k8s-pod-network.9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.776 [INFO][3793] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" HandleID="k8s-pod-network.9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-2b989ca6ad", "pod":"whisker-5fcd84477-qvb4g", "timestamp":"2025-09-05 23:52:11.776736686 +0000 UTC"}, Hostname:"ci-4081-3-5-n-2b989ca6ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.777 [INFO][3793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.777 [INFO][3793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.777 [INFO][3793] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-2b989ca6ad' Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.788 [INFO][3793] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.795 [INFO][3793] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.801 [INFO][3793] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.804 [INFO][3793] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.809 [INFO][3793] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.809 [INFO][3793] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.812 [INFO][3793] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454 Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.817 [INFO][3793] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.826 [INFO][3793] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.1/26] block=192.168.87.0/26 handle="k8s-pod-network.9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.827 [INFO][3793] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.1/26] handle="k8s-pod-network.9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.827 [INFO][3793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
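[Editor's note] The IPAM entries above walk the allocation path: confirm this node's affinity for block 192.168.87.0/26, then claim the next free ordinal inside it, which is why the first workloads scheduled to ci-4081-3-5-n-2b989ca6ad receive 192.168.87.1, .2, and .3 in sequence later in this log (ordinal 0 was evidently taken already, commonly by the node's own tunnel address). The block arithmetic, checked in Go:

package main

import (
	"fmt"
	"net/netip"
)

// nthAddr returns the address at ordinal n inside the block.
func nthAddr(block netip.Prefix, n int) netip.Addr {
	a := block.Addr()
	for i := 0; i < n; i++ {
		a = a.Next()
	}
	return a
}

func main() {
	block := netip.MustParsePrefix("192.168.87.0/26")
	fmt.Println("addresses per block:", 1<<(32-block.Bits())) // 64
	for _, ord := range []int{1, 2, 3} {
		fmt.Println("ordinal", ord, "=>", nthAddr(block, ord))
	}
	// ordinal 1 => 192.168.87.1 (whisker), 2 => goldmane, 3 => calico-apiserver
}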
Sep 5 23:52:11.862952 containerd[1476]: 2025-09-05 23:52:11.827 [INFO][3793] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.1/26] IPv6=[] ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" HandleID="k8s-pod-network.9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0" Sep 5 23:52:11.864742 containerd[1476]: 2025-09-05 23:52:11.830 [INFO][3782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Namespace="calico-system" Pod="whisker-5fcd84477-qvb4g" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0", GenerateName:"whisker-5fcd84477-", Namespace:"calico-system", SelfLink:"", UID:"3e691cca-93dc-4eef-9c75-63f792f0cb2b", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 52, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fcd84477", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"", Pod:"whisker-5fcd84477-qvb4g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1e08aa78cff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:11.864742 containerd[1476]: 2025-09-05 23:52:11.830 [INFO][3782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.1/32] ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Namespace="calico-system" Pod="whisker-5fcd84477-qvb4g" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0" Sep 5 23:52:11.864742 containerd[1476]: 2025-09-05 23:52:11.830 [INFO][3782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e08aa78cff ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Namespace="calico-system" Pod="whisker-5fcd84477-qvb4g" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0" Sep 5 23:52:11.864742 containerd[1476]: 2025-09-05 23:52:11.837 [INFO][3782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Namespace="calico-system" Pod="whisker-5fcd84477-qvb4g" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0" Sep 5 23:52:11.864742 containerd[1476]: 2025-09-05 23:52:11.837 [INFO][3782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Namespace="calico-system" 
Pod="whisker-5fcd84477-qvb4g" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0", GenerateName:"whisker-5fcd84477-", Namespace:"calico-system", SelfLink:"", UID:"3e691cca-93dc-4eef-9c75-63f792f0cb2b", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 52, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fcd84477", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454", Pod:"whisker-5fcd84477-qvb4g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1e08aa78cff", MAC:"fa:59:52:15:ce:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:11.864742 containerd[1476]: 2025-09-05 23:52:11.856 [INFO][3782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454" Namespace="calico-system" Pod="whisker-5fcd84477-qvb4g" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--5fcd84477--qvb4g-eth0" Sep 5 23:52:11.883961 containerd[1476]: time="2025-09-05T23:52:11.883383973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:11.883961 containerd[1476]: time="2025-09-05T23:52:11.883601574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:11.883961 containerd[1476]: time="2025-09-05T23:52:11.883639614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:11.883961 containerd[1476]: time="2025-09-05T23:52:11.883801614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:11.902796 systemd[1]: Started cri-containerd-9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454.scope - libcontainer container 9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454. 
Sep 5 23:52:11.938283 containerd[1476]: time="2025-09-05T23:52:11.937931080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcd84477-qvb4g,Uid:3e691cca-93dc-4eef-9c75-63f792f0cb2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454\"" Sep 5 23:52:11.941371 containerd[1476]: time="2025-09-05T23:52:11.941106646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 23:52:12.299961 systemd[1]: run-containerd-runc-k8s.io-a98969bb59f2211e7dba38f5ac5bb66ebe905027248faebd0dffa7409a36488a-runc.nwEXyU.mount: Deactivated successfully. Sep 5 23:52:12.557627 kernel: bpftool[3971]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 23:52:12.768871 systemd-networkd[1376]: vxlan.calico: Link UP Sep 5 23:52:12.768879 systemd-networkd[1376]: vxlan.calico: Gained carrier Sep 5 23:52:13.054407 kubelet[2566]: I0905 23:52:13.054182 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410fb30a-cedd-4b7f-8771-5ff3acc63254" path="/var/lib/kubelet/pods/410fb30a-cedd-4b7f-8771-5ff3acc63254/volumes" Sep 5 23:52:13.190626 systemd-networkd[1376]: cali1e08aa78cff: Gained IPv6LL Sep 5 23:52:13.544697 containerd[1476]: time="2025-09-05T23:52:13.544232457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 23:52:13.547808 containerd[1476]: time="2025-09-05T23:52:13.547755544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.606607338s" Sep 5 23:52:13.547808 containerd[1476]: time="2025-09-05T23:52:13.547804904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 23:52:13.548141 containerd[1476]: time="2025-09-05T23:52:13.547947264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:13.549161 containerd[1476]: time="2025-09-05T23:52:13.548703506Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:13.552900 containerd[1476]: time="2025-09-05T23:52:13.552850673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:13.557399 containerd[1476]: time="2025-09-05T23:52:13.557350762Z" level=info msg="CreateContainer within sandbox \"9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 23:52:13.572267 containerd[1476]: time="2025-09-05T23:52:13.572098869Z" level=info msg="CreateContainer within sandbox \"9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b81dd85cd23ef99060aabf0b2758c0f5085c9339b9a76c6e023bce1767ee8a2f\"" Sep 5 23:52:13.573651 containerd[1476]: time="2025-09-05T23:52:13.572839070Z" level=info 
msg="StartContainer for \"b81dd85cd23ef99060aabf0b2758c0f5085c9339b9a76c6e023bce1767ee8a2f\"" Sep 5 23:52:13.609933 systemd[1]: Started cri-containerd-b81dd85cd23ef99060aabf0b2758c0f5085c9339b9a76c6e023bce1767ee8a2f.scope - libcontainer container b81dd85cd23ef99060aabf0b2758c0f5085c9339b9a76c6e023bce1767ee8a2f. Sep 5 23:52:13.656195 containerd[1476]: time="2025-09-05T23:52:13.656140744Z" level=info msg="StartContainer for \"b81dd85cd23ef99060aabf0b2758c0f5085c9339b9a76c6e023bce1767ee8a2f\" returns successfully" Sep 5 23:52:13.660935 containerd[1476]: time="2025-09-05T23:52:13.660876912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 23:52:14.534199 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Sep 5 23:52:15.428354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914864642.mount: Deactivated successfully. Sep 5 23:52:15.453897 containerd[1476]: time="2025-09-05T23:52:15.453821887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:15.455196 containerd[1476]: time="2025-09-05T23:52:15.455114489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 23:52:15.456904 containerd[1476]: time="2025-09-05T23:52:15.456608532Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:15.460248 containerd[1476]: time="2025-09-05T23:52:15.460201818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:15.461439 containerd[1476]: time="2025-09-05T23:52:15.461299220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.800187427s" Sep 5 23:52:15.461439 containerd[1476]: time="2025-09-05T23:52:15.461336620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 23:52:15.474826 containerd[1476]: time="2025-09-05T23:52:15.474782724Z" level=info msg="CreateContainer within sandbox \"9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 23:52:15.495941 containerd[1476]: time="2025-09-05T23:52:15.495892481Z" level=info msg="CreateContainer within sandbox \"9f87901ab17a952a6fcf348c223cde216ffa63a577162a506b030ef4b210d454\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e7083a791a987d9e35b444edf83e6151b92a06d5cb87c7f955e5470010bce70e\"" Sep 5 23:52:15.500574 containerd[1476]: time="2025-09-05T23:52:15.498873326Z" level=info msg="StartContainer for \"e7083a791a987d9e35b444edf83e6151b92a06d5cb87c7f955e5470010bce70e\"" Sep 5 23:52:15.539852 systemd[1]: Started cri-containerd-e7083a791a987d9e35b444edf83e6151b92a06d5cb87c7f955e5470010bce70e.scope - libcontainer container 
e7083a791a987d9e35b444edf83e6151b92a06d5cb87c7f955e5470010bce70e. Sep 5 23:52:15.586819 containerd[1476]: time="2025-09-05T23:52:15.586665519Z" level=info msg="StartContainer for \"e7083a791a987d9e35b444edf83e6151b92a06d5cb87c7f955e5470010bce70e\" returns successfully" Sep 5 23:52:19.051022 containerd[1476]: time="2025-09-05T23:52:19.050976464Z" level=info msg="StopPodSandbox for \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\"" Sep 5 23:52:19.052720 containerd[1476]: time="2025-09-05T23:52:19.051298104Z" level=info msg="StopPodSandbox for \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\"" Sep 5 23:52:19.054491 containerd[1476]: time="2025-09-05T23:52:19.054447829Z" level=info msg="StopPodSandbox for \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\"" Sep 5 23:52:19.166350 kubelet[2566]: I0905 23:52:19.166164 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5fcd84477-qvb4g" podStartSLOduration=4.63949742 podStartE2EDuration="8.166143405s" podCreationTimestamp="2025-09-05 23:52:11 +0000 UTC" firstStartedPulling="2025-09-05 23:52:11.940522445 +0000 UTC m=+43.033773839" lastFinishedPulling="2025-09-05 23:52:15.46716843 +0000 UTC m=+46.560419824" observedRunningTime="2025-09-05 23:52:16.338499899 +0000 UTC m=+47.431751293" watchObservedRunningTime="2025-09-05 23:52:19.166143405 +0000 UTC m=+50.259394799" Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.165 [INFO][4189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.167 [INFO][4189] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" iface="eth0" netns="/var/run/netns/cni-2a27961c-cbac-0a7a-b5af-050281e04ca1" Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.167 [INFO][4189] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" iface="eth0" netns="/var/run/netns/cni-2a27961c-cbac-0a7a-b5af-050281e04ca1" Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.170 [INFO][4189] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" iface="eth0" netns="/var/run/netns/cni-2a27961c-cbac-0a7a-b5af-050281e04ca1" Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.170 [INFO][4189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.170 [INFO][4189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.220 [INFO][4209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" HandleID="k8s-pod-network.426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.222 [INFO][4209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
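[Editor's note] The pod_startup_latency_tracker entry above for whisker-5fcd84477-qvb4g is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window, with the pull window measured on the monotonic clock (the m=+ offsets), matching the SLI definition of startup latency excluding image pulls. Verifying the arithmetic from the logged values:

package main

import "fmt"

func main() {
	// Values copied from the log entry above (seconds).
	const (
		e2e          = 8.166143405  // podStartE2EDuration
		firstPulling = 43.033773839 // firstStartedPulling, m=+ offset
		lastPulling  = 46.560419824 // lastFinishedPulling, m=+ offset
	)
	slo := e2e - (lastPulling - firstPulling)
	fmt.Printf("podStartSLOduration = %.9f\n", slo) // 4.639497420, matching 4.63949742 in the log
}

The same identity holds for the earlier calico-node entry: 13.296512232 - (41.375212735 - 30.539681170) = 2.460980667.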
Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.222 [INFO][4209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.234 [WARNING][4209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" HandleID="k8s-pod-network.426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.234 [INFO][4209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" HandleID="k8s-pod-network.426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.236 [INFO][4209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:19.241266 containerd[1476]: 2025-09-05 23:52:19.238 [INFO][4189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:19.247138 containerd[1476]: time="2025-09-05T23:52:19.241421084Z" level=info msg="TearDown network for sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\" successfully" Sep 5 23:52:19.247138 containerd[1476]: time="2025-09-05T23:52:19.241447844Z" level=info msg="StopPodSandbox for \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\" returns successfully" Sep 5 23:52:19.247138 containerd[1476]: time="2025-09-05T23:52:19.246081131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wbmmx,Uid:42e44fef-e5ea-4ffe-8404-404c1a1ddfc1,Namespace:calico-system,Attempt:1,}" Sep 5 23:52:19.249215 systemd[1]: run-netns-cni\x2d2a27961c\x2dcbac\x2d0a7a\x2db5af\x2d050281e04ca1.mount: Deactivated successfully. Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.161 [INFO][4191] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.168 [INFO][4191] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" iface="eth0" netns="/var/run/netns/cni-decbeb43-a07d-91de-8061-4cd49bd8c771" Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.169 [INFO][4191] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" iface="eth0" netns="/var/run/netns/cni-decbeb43-a07d-91de-8061-4cd49bd8c771" Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.170 [INFO][4191] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" iface="eth0" netns="/var/run/netns/cni-decbeb43-a07d-91de-8061-4cd49bd8c771" Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.170 [INFO][4191] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.170 [INFO][4191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.225 [INFO][4208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" HandleID="k8s-pod-network.ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.226 [INFO][4208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.237 [INFO][4208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.256 [WARNING][4208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" HandleID="k8s-pod-network.ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.256 [INFO][4208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" HandleID="k8s-pod-network.ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.261 [INFO][4208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:19.270686 containerd[1476]: 2025-09-05 23:52:19.266 [INFO][4191] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:19.270686 containerd[1476]: time="2025-09-05T23:52:19.271097451Z" level=info msg="TearDown network for sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\" successfully" Sep 5 23:52:19.270686 containerd[1476]: time="2025-09-05T23:52:19.271126411Z" level=info msg="StopPodSandbox for \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\" returns successfully" Sep 5 23:52:19.282378 containerd[1476]: time="2025-09-05T23:52:19.280892386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fdcb9f565-bpnlv,Uid:c8fb7313-1f3f-4238-86b4-1214e62f55c2,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:52:19.281340 systemd[1]: run-netns-cni\x2ddecbeb43\x2da07d\x2d91de\x2d8061\x2d4cd49bd8c771.mount: Deactivated successfully. Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.185 [INFO][4188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.187 [INFO][4188] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" iface="eth0" netns="/var/run/netns/cni-5d582ea9-5be9-6e9b-d8d1-4c3ce9ad4e68" Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.189 [INFO][4188] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" iface="eth0" netns="/var/run/netns/cni-5d582ea9-5be9-6e9b-d8d1-4c3ce9ad4e68" Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.189 [INFO][4188] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" iface="eth0" netns="/var/run/netns/cni-5d582ea9-5be9-6e9b-d8d1-4c3ce9ad4e68" Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.189 [INFO][4188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.189 [INFO][4188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.226 [INFO][4217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" HandleID="k8s-pod-network.0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.227 [INFO][4217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.262 [INFO][4217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.289 [WARNING][4217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" HandleID="k8s-pod-network.0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.289 [INFO][4217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" HandleID="k8s-pod-network.0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.295 [INFO][4217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:19.308637 containerd[1476]: 2025-09-05 23:52:19.299 [INFO][4188] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Sep 5 23:52:19.311940 containerd[1476]: time="2025-09-05T23:52:19.309761391Z" level=info msg="TearDown network for sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\" successfully" Sep 5 23:52:19.311940 containerd[1476]: time="2025-09-05T23:52:19.309806112Z" level=info msg="StopPodSandbox for \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\" returns successfully" Sep 5 23:52:19.314866 containerd[1476]: time="2025-09-05T23:52:19.314823639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jphsd,Uid:4e4cbde1-da6c-4658-8a40-e74e4e58e9f5,Namespace:kube-system,Attempt:1,}" Sep 5 23:52:19.315837 systemd[1]: run-netns-cni\x2d5d582ea9\x2d5be9\x2d6e9b\x2dd8d1\x2d4c3ce9ad4e68.mount: Deactivated successfully. Sep 5 23:52:19.507198 systemd-networkd[1376]: cali8323365c05f: Link UP Sep 5 23:52:19.508116 systemd-networkd[1376]: cali8323365c05f: Gained carrier Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.363 [INFO][4229] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0 goldmane-54d579b49d- calico-system 42e44fef-e5ea-4ffe-8404-404c1a1ddfc1 957 0 2025-09-05 23:51:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-n-2b989ca6ad goldmane-54d579b49d-wbmmx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8323365c05f [] [] }} ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" Namespace="calico-system" Pod="goldmane-54d579b49d-wbmmx" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.364 [INFO][4229] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" Namespace="calico-system" Pod="goldmane-54d579b49d-wbmmx" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.413 [INFO][4263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" HandleID="k8s-pod-network.945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.413 [INFO][4263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" HandleID="k8s-pod-network.945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3060), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-2b989ca6ad", "pod":"goldmane-54d579b49d-wbmmx", "timestamp":"2025-09-05 23:52:19.413065314 +0000 UTC"}, Hostname:"ci-4081-3-5-n-2b989ca6ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.413 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.413 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.413 [INFO][4263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-2b989ca6ad' Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.430 [INFO][4263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.440 [INFO][4263] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.454 [INFO][4263] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.459 [INFO][4263] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.462 [INFO][4263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.462 [INFO][4263] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.467 [INFO][4263] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.475 [INFO][4263] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.488 [INFO][4263] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.2/26] block=192.168.87.0/26 handle="k8s-pod-network.945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.488 [INFO][4263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.2/26] handle="k8s-pod-network.945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.488 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:52:19.529178 containerd[1476]: 2025-09-05 23:52:19.488 [INFO][4263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.2/26] IPv6=[] ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" HandleID="k8s-pod-network.945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:19.534307 containerd[1476]: 2025-09-05 23:52:19.492 [INFO][4229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" Namespace="calico-system" Pod="goldmane-54d579b49d-wbmmx" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"42e44fef-e5ea-4ffe-8404-404c1a1ddfc1", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"", Pod:"goldmane-54d579b49d-wbmmx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8323365c05f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:19.534307 containerd[1476]: 2025-09-05 23:52:19.494 [INFO][4229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.2/32] ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" Namespace="calico-system" Pod="goldmane-54d579b49d-wbmmx" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:19.534307 containerd[1476]: 2025-09-05 23:52:19.494 [INFO][4229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8323365c05f ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" Namespace="calico-system" Pod="goldmane-54d579b49d-wbmmx" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:19.534307 containerd[1476]: 2025-09-05 23:52:19.507 [INFO][4229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" Namespace="calico-system" Pod="goldmane-54d579b49d-wbmmx" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:19.534307 containerd[1476]: 2025-09-05 23:52:19.509 [INFO][4229] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" 
Namespace="calico-system" Pod="goldmane-54d579b49d-wbmmx" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"42e44fef-e5ea-4ffe-8404-404c1a1ddfc1", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a", Pod:"goldmane-54d579b49d-wbmmx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8323365c05f", MAC:"6e:98:74:84:0c:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:19.534307 containerd[1476]: 2025-09-05 23:52:19.525 [INFO][4229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a" Namespace="calico-system" Pod="goldmane-54d579b49d-wbmmx" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:19.573408 containerd[1476]: time="2025-09-05T23:52:19.573057126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:19.573408 containerd[1476]: time="2025-09-05T23:52:19.573276727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:19.573408 containerd[1476]: time="2025-09-05T23:52:19.573312727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:19.573850 containerd[1476]: time="2025-09-05T23:52:19.573459647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:19.610648 systemd[1]: Started cri-containerd-945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a.scope - libcontainer container 945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a. 
Sep 5 23:52:19.629351 systemd-networkd[1376]: cali3c26e2e82ca: Link UP Sep 5 23:52:19.634458 systemd-networkd[1376]: cali3c26e2e82ca: Gained carrier Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.380 [INFO][4239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0 calico-apiserver-6fdcb9f565- calico-apiserver c8fb7313-1f3f-4238-86b4-1214e62f55c2 956 0 2025-09-05 23:51:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fdcb9f565 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-2b989ca6ad calico-apiserver-6fdcb9f565-bpnlv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3c26e2e82ca [] [] }} ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-bpnlv" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.381 [INFO][4239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-bpnlv" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.445 [INFO][4270] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" HandleID="k8s-pod-network.843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.446 [INFO][4270] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" HandleID="k8s-pod-network.843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-2b989ca6ad", "pod":"calico-apiserver-6fdcb9f565-bpnlv", "timestamp":"2025-09-05 23:52:19.445709406 +0000 UTC"}, Hostname:"ci-4081-3-5-n-2b989ca6ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.446 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.488 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.488 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-2b989ca6ad' Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.538 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.550 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.564 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.568 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.573 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.573 [INFO][4270] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.577 [INFO][4270] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328 Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.585 [INFO][4270] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.614 [INFO][4270] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.3/26] block=192.168.87.0/26 handle="k8s-pod-network.843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.614 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.3/26] handle="k8s-pod-network.843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.614 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
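[Editor's note] Note the interleaving above: request [4270] logs "About to acquire host-wide IPAM lock" at 23:52:19.446 but only logs "Acquired" at 23:52:19.488, the instant request [4263] releases it, so concurrent CNI ADDs on one node serialize their block updates. A rough Linux-only Go illustration of such a host-wide lock using flock(2); the pattern and the lock-file path are assumptions, not Calico's exact mechanism:

package main

import (
	"log"
	"os"
	"syscall"
)

// withHostWideLock runs fn while holding an exclusive flock on path, so
// every process on this host that uses the same path serializes.
func withHostWideLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	// Blocks until the current holder releases, like 19.446 -> 19.488 above.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return fn()
}

func main() {
	// Hypothetical lock-file path, for illustration only.
	err := withHostWideLock("/var/run/calico-ipam.lock", func() error {
		// claim the next free ordinal in the affine block here
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}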
Sep 5 23:52:19.668529 containerd[1476]: 2025-09-05 23:52:19.614 [INFO][4270] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.3/26] IPv6=[] ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" HandleID="k8s-pod-network.843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:19.669430 containerd[1476]: 2025-09-05 23:52:19.618 [INFO][4239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-bpnlv" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0", GenerateName:"calico-apiserver-6fdcb9f565-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8fb7313-1f3f-4238-86b4-1214e62f55c2", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fdcb9f565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"", Pod:"calico-apiserver-6fdcb9f565-bpnlv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c26e2e82ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:19.669430 containerd[1476]: 2025-09-05 23:52:19.618 [INFO][4239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.3/32] ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-bpnlv" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:19.669430 containerd[1476]: 2025-09-05 23:52:19.618 [INFO][4239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c26e2e82ca ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-bpnlv" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:19.669430 containerd[1476]: 2025-09-05 23:52:19.642 [INFO][4239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-bpnlv" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:19.669430 containerd[1476]: 2025-09-05 23:52:19.643 
[INFO][4239] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-bpnlv" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0", GenerateName:"calico-apiserver-6fdcb9f565-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8fb7313-1f3f-4238-86b4-1214e62f55c2", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fdcb9f565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328", Pod:"calico-apiserver-6fdcb9f565-bpnlv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c26e2e82ca", MAC:"0e:7d:91:b9:39:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:19.669430 containerd[1476]: 2025-09-05 23:52:19.663 [INFO][4239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-bpnlv" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:19.726372 systemd-networkd[1376]: cali76aa7fe2584: Link UP Sep 5 23:52:19.727190 systemd-networkd[1376]: cali76aa7fe2584: Gained carrier Sep 5 23:52:19.732818 containerd[1476]: time="2025-09-05T23:52:19.732663818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:19.732818 containerd[1476]: time="2025-09-05T23:52:19.732777378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:19.733278 containerd[1476]: time="2025-09-05T23:52:19.732982819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:19.734693 containerd[1476]: time="2025-09-05T23:52:19.734353901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:19.741336 containerd[1476]: time="2025-09-05T23:52:19.741273792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wbmmx,Uid:42e44fef-e5ea-4ffe-8404-404c1a1ddfc1,Namespace:calico-system,Attempt:1,} returns sandbox id \"945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a\"" Sep 5 23:52:19.745997 containerd[1476]: time="2025-09-05T23:52:19.745731279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 23:52:19.757828 systemd[1]: Started cri-containerd-843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328.scope - libcontainer container 843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328. Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.452 [INFO][4253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0 coredns-668d6bf9bc- kube-system 4e4cbde1-da6c-4658-8a40-e74e4e58e9f5 958 0 2025-09-05 23:51:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-2b989ca6ad coredns-668d6bf9bc-jphsd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali76aa7fe2584 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jphsd" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.452 [INFO][4253] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jphsd" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.493 [INFO][4281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" HandleID="k8s-pod-network.6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.494 [INFO][4281] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" HandleID="k8s-pod-network.6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002abf40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-2b989ca6ad", "pod":"coredns-668d6bf9bc-jphsd", "timestamp":"2025-09-05 23:52:19.493748241 +0000 UTC"}, Hostname:"ci-4081-3-5-n-2b989ca6ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.494 [INFO][4281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
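The `PullImage "ghcr.io/flatcar/calico/goldmane:v3.30.3"` request above can be reproduced against the same daemon with containerd's Go client. The socket path and the "k8s.io" namespace are the usual CRI defaults, assumed here rather than read from the log, and the import path assumes the containerd 1.x client module layout.

```go
// Hedged sketch: pull the same image the kubelet requested above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; the CRI plugin stores images under "k8s.io".
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name())
}
```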
Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.616 [INFO][4281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.616 [INFO][4281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-2b989ca6ad' Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.642 [INFO][4281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.650 [INFO][4281] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.663 [INFO][4281] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.669 [INFO][4281] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.676 [INFO][4281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.676 [INFO][4281] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.681 [INFO][4281] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588 Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.694 [INFO][4281] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.703 [INFO][4281] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.4/26] block=192.168.87.0/26 handle="k8s-pod-network.6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.704 [INFO][4281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.4/26] handle="k8s-pod-network.6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.704 [INFO][4281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:52:19.763727 containerd[1476]: 2025-09-05 23:52:19.704 [INFO][4281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.4/26] IPv6=[] ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" HandleID="k8s-pod-network.6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:19.765111 containerd[1476]: 2025-09-05 23:52:19.709 [INFO][4253] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jphsd" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e4cbde1-da6c-4658-8a40-e74e4e58e9f5", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"", Pod:"coredns-668d6bf9bc-jphsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76aa7fe2584", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:19.765111 containerd[1476]: 2025-09-05 23:52:19.710 [INFO][4253] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.4/32] ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jphsd" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:19.765111 containerd[1476]: 2025-09-05 23:52:19.710 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76aa7fe2584 ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jphsd" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:19.765111 containerd[1476]: 2025-09-05 23:52:19.726 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-jphsd" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:19.765111 containerd[1476]: 2025-09-05 23:52:19.726 [INFO][4253] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jphsd" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e4cbde1-da6c-4658-8a40-e74e4e58e9f5", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588", Pod:"coredns-668d6bf9bc-jphsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76aa7fe2584", MAC:"1e:92:2d:f2:8e:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:19.765111 containerd[1476]: 2025-09-05 23:52:19.755 [INFO][4253] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jphsd" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:19.799906 containerd[1476]: time="2025-09-05T23:52:19.799801244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:19.799906 containerd[1476]: time="2025-09-05T23:52:19.799860204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:19.800384 containerd[1476]: time="2025-09-05T23:52:19.799885484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:19.800384 containerd[1476]: time="2025-09-05T23:52:19.800333285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:19.826788 systemd[1]: Started cri-containerd-6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588.scope - libcontainer container 6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588. Sep 5 23:52:19.839433 containerd[1476]: time="2025-09-05T23:52:19.838375705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fdcb9f565-bpnlv,Uid:c8fb7313-1f3f-4238-86b4-1214e62f55c2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328\"" Sep 5 23:52:19.880135 containerd[1476]: time="2025-09-05T23:52:19.880064130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jphsd,Uid:4e4cbde1-da6c-4658-8a40-e74e4e58e9f5,Namespace:kube-system,Attempt:1,} returns sandbox id \"6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588\"" Sep 5 23:52:19.886189 containerd[1476]: time="2025-09-05T23:52:19.885748339Z" level=info msg="CreateContainer within sandbox \"6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:52:19.905086 containerd[1476]: time="2025-09-05T23:52:19.903413247Z" level=info msg="CreateContainer within sandbox \"6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"201695f29dd0c565f31270416b445d0f57833def19178ee5e66097f5ad7481d8\"" Sep 5 23:52:19.905622 containerd[1476]: time="2025-09-05T23:52:19.905492970Z" level=info msg="StartContainer for \"201695f29dd0c565f31270416b445d0f57833def19178ee5e66097f5ad7481d8\"" Sep 5 23:52:19.951532 systemd[1]: Started cri-containerd-201695f29dd0c565f31270416b445d0f57833def19178ee5e66097f5ad7481d8.scope - libcontainer container 201695f29dd0c565f31270416b445d0f57833def19178ee5e66097f5ad7481d8. 
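In the CoreDNS endpoint dump above, Go prints the port numbers in hex: Port:0x35 is 53 (DNS over both UDP and TCP) and Port:0x23c1 is 9153, the CoreDNS Prometheus metrics port. The same list in decimal, using the projectcalico API types (import paths assumed from the current Calico repository layout):

```go
// The three named ports from the coredns WorkloadEndpoint dump, in decimal.
package main

import (
	"fmt"

	v3 "github.com/projectcalico/api/pkg/apis/projectcalico/v3"
	"github.com/projectcalico/api/pkg/lib/numorstring"
)

func main() {
	ports := []v3.WorkloadEndpointPort{
		{Name: "dns", Protocol: numorstring.ProtocolFromString("UDP"), Port: 53},
		{Name: "dns-tcp", Protocol: numorstring.ProtocolFromString("TCP"), Port: 53},
		{Name: "metrics", Protocol: numorstring.ProtocolFromString("TCP"), Port: 9153},
	}
	fmt.Printf("%+v\n", ports)
}
```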
Sep 5 23:52:19.992856 containerd[1476]: time="2025-09-05T23:52:19.992748108Z" level=info msg="StartContainer for \"201695f29dd0c565f31270416b445d0f57833def19178ee5e66097f5ad7481d8\" returns successfully" Sep 5 23:52:20.356941 kubelet[2566]: I0905 23:52:20.355968 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jphsd" podStartSLOduration=46.355948026 podStartE2EDuration="46.355948026s" podCreationTimestamp="2025-09-05 23:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:20.339190441 +0000 UTC m=+51.432441875" watchObservedRunningTime="2025-09-05 23:52:20.355948026 +0000 UTC m=+51.449199420" Sep 5 23:52:20.869835 systemd-networkd[1376]: cali3c26e2e82ca: Gained IPv6LL Sep 5 23:52:20.933884 systemd-networkd[1376]: cali8323365c05f: Gained IPv6LL Sep 5 23:52:20.934387 systemd-networkd[1376]: cali76aa7fe2584: Gained IPv6LL Sep 5 23:52:21.064059 containerd[1476]: time="2025-09-05T23:52:21.063800111Z" level=info msg="StopPodSandbox for \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\"" Sep 5 23:52:21.065114 containerd[1476]: time="2025-09-05T23:52:21.064158352Z" level=info msg="StopPodSandbox for \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\"" Sep 5 23:52:21.068065 containerd[1476]: time="2025-09-05T23:52:21.067395597Z" level=info msg="StopPodSandbox for \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\"" Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.153 [INFO][4516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.155 [INFO][4516] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" iface="eth0" netns="/var/run/netns/cni-921ba084-b358-67da-c034-e851eecea6f5" Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.156 [INFO][4516] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" iface="eth0" netns="/var/run/netns/cni-921ba084-b358-67da-c034-e851eecea6f5" Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.161 [INFO][4516] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" iface="eth0" netns="/var/run/netns/cni-921ba084-b358-67da-c034-e851eecea6f5" Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.161 [INFO][4516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.161 [INFO][4516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.245 [INFO][4534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" HandleID="k8s-pod-network.ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.246 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.246 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.260 [WARNING][4534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" HandleID="k8s-pod-network.ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.260 [INFO][4534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" HandleID="k8s-pod-network.ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.263 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:21.274041 containerd[1476]: 2025-09-05 23:52:21.268 [INFO][4516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:21.279134 containerd[1476]: time="2025-09-05T23:52:21.277622472Z" level=info msg="TearDown network for sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\" successfully" Sep 5 23:52:21.279134 containerd[1476]: time="2025-09-05T23:52:21.277665392Z" level=info msg="StopPodSandbox for \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\" returns successfully" Sep 5 23:52:21.280067 systemd[1]: run-netns-cni\x2d921ba084\x2db358\x2d67da\x2dc034\x2de851eecea6f5.mount: Deactivated successfully. 
Sep 5 23:52:21.285769 containerd[1476]: time="2025-09-05T23:52:21.283158680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58495b-ch6j7,Uid:08ac5769-ff1d-4260-a1f9-1ac84156590f,Namespace:calico-system,Attempt:1,}" Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.212 [INFO][4504] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.213 [INFO][4504] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" iface="eth0" netns="/var/run/netns/cni-8fc915fd-8567-f223-ba1e-690b15a41558" Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.215 [INFO][4504] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" iface="eth0" netns="/var/run/netns/cni-8fc915fd-8567-f223-ba1e-690b15a41558" Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.217 [INFO][4504] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" iface="eth0" netns="/var/run/netns/cni-8fc915fd-8567-f223-ba1e-690b15a41558" Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.217 [INFO][4504] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.217 [INFO][4504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.310 [INFO][4542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" HandleID="k8s-pod-network.3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.310 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.311 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.338 [WARNING][4542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" HandleID="k8s-pod-network.3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.338 [INFO][4542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" HandleID="k8s-pod-network.3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.341 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:21.349932 containerd[1476]: 2025-09-05 23:52:21.347 [INFO][4504] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Sep 5 23:52:21.351015 containerd[1476]: time="2025-09-05T23:52:21.350983221Z" level=info msg="TearDown network for sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\" successfully" Sep 5 23:52:21.351171 containerd[1476]: time="2025-09-05T23:52:21.351097942Z" level=info msg="StopPodSandbox for \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\" returns successfully" Sep 5 23:52:21.356164 systemd[1]: run-netns-cni\x2d8fc915fd\x2d8567\x2df223\x2dba1e\x2d690b15a41558.mount: Deactivated successfully. Sep 5 23:52:21.357598 containerd[1476]: time="2025-09-05T23:52:21.356483910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fdcb9f565-lqnhs,Uid:ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.210 [INFO][4507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.210 [INFO][4507] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" iface="eth0" netns="/var/run/netns/cni-fa9f4738-b7e5-c293-c0cb-67d23b2eaed4" Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.211 [INFO][4507] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" iface="eth0" netns="/var/run/netns/cni-fa9f4738-b7e5-c293-c0cb-67d23b2eaed4" Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.212 [INFO][4507] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" iface="eth0" netns="/var/run/netns/cni-fa9f4738-b7e5-c293-c0cb-67d23b2eaed4" Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.212 [INFO][4507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.212 [INFO][4507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.322 [INFO][4540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" HandleID="k8s-pod-network.a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.322 [INFO][4540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.341 [INFO][4540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.365 [WARNING][4540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" HandleID="k8s-pod-network.a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.365 [INFO][4540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" HandleID="k8s-pod-network.a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.371 [INFO][4540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:21.388961 containerd[1476]: 2025-09-05 23:52:21.382 [INFO][4507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:21.389736 containerd[1476]: time="2025-09-05T23:52:21.389449919Z" level=info msg="TearDown network for sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\" successfully" Sep 5 23:52:21.389736 containerd[1476]: time="2025-09-05T23:52:21.389479039Z" level=info msg="StopPodSandbox for \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\" returns successfully" Sep 5 23:52:21.390979 containerd[1476]: time="2025-09-05T23:52:21.390816601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tlkz,Uid:7f45c648-2314-4d67-9e44-04067f334c76,Namespace:calico-system,Attempt:1,}" Sep 5 23:52:21.617174 systemd-networkd[1376]: caliefadc3171ea: Link UP Sep 5 23:52:21.622317 systemd-networkd[1376]: caliefadc3171ea: Gained carrier Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.421 [INFO][4553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0 calico-kube-controllers-68d58495b- calico-system 08ac5769-ff1d-4260-a1f9-1ac84156590f 991 0 2025-09-05 23:51:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68d58495b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-n-2b989ca6ad calico-kube-controllers-68d58495b-ch6j7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliefadc3171ea [] [] }} ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Namespace="calico-system" Pod="calico-kube-controllers-68d58495b-ch6j7" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.424 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Namespace="calico-system" Pod="calico-kube-controllers-68d58495b-ch6j7" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.515 [INFO][4590] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" 
HandleID="k8s-pod-network.8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.515 [INFO][4590] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" HandleID="k8s-pod-network.8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000334e10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-2b989ca6ad", "pod":"calico-kube-controllers-68d58495b-ch6j7", "timestamp":"2025-09-05 23:52:21.515258748 +0000 UTC"}, Hostname:"ci-4081-3-5-n-2b989ca6ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.515 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.515 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.515 [INFO][4590] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-2b989ca6ad' Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.534 [INFO][4590] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.546 [INFO][4590] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.558 [INFO][4590] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.562 [INFO][4590] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.567 [INFO][4590] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.567 [INFO][4590] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.574 [INFO][4590] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3 Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.583 [INFO][4590] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.595 [INFO][4590] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.5/26] block=192.168.87.0/26 handle="k8s-pod-network.8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.657333 
containerd[1476]: 2025-09-05 23:52:21.599 [INFO][4590] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.5/26] handle="k8s-pod-network.8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.599 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:21.657333 containerd[1476]: 2025-09-05 23:52:21.599 [INFO][4590] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.5/26] IPv6=[] ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" HandleID="k8s-pod-network.8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:21.660145 containerd[1476]: 2025-09-05 23:52:21.606 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Namespace="calico-system" Pod="calico-kube-controllers-68d58495b-ch6j7" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0", GenerateName:"calico-kube-controllers-68d58495b-", Namespace:"calico-system", SelfLink:"", UID:"08ac5769-ff1d-4260-a1f9-1ac84156590f", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d58495b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"", Pod:"calico-kube-controllers-68d58495b-ch6j7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefadc3171ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:21.660145 containerd[1476]: 2025-09-05 23:52:21.607 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.5/32] ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Namespace="calico-system" Pod="calico-kube-controllers-68d58495b-ch6j7" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:21.660145 containerd[1476]: 2025-09-05 23:52:21.607 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefadc3171ea ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Namespace="calico-system" Pod="calico-kube-controllers-68d58495b-ch6j7" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 
23:52:21.660145 containerd[1476]: 2025-09-05 23:52:21.626 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Namespace="calico-system" Pod="calico-kube-controllers-68d58495b-ch6j7" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:21.660145 containerd[1476]: 2025-09-05 23:52:21.627 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Namespace="calico-system" Pod="calico-kube-controllers-68d58495b-ch6j7" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0", GenerateName:"calico-kube-controllers-68d58495b-", Namespace:"calico-system", SelfLink:"", UID:"08ac5769-ff1d-4260-a1f9-1ac84156590f", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d58495b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3", Pod:"calico-kube-controllers-68d58495b-ch6j7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefadc3171ea", MAC:"92:b0:6d:27:1b:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:21.660145 containerd[1476]: 2025-09-05 23:52:21.654 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3" Namespace="calico-system" Pod="calico-kube-controllers-68d58495b-ch6j7" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:21.725795 containerd[1476]: time="2025-09-05T23:52:21.722406778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:21.725795 containerd[1476]: time="2025-09-05T23:52:21.724625861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:21.725795 containerd[1476]: time="2025-09-05T23:52:21.724849621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:21.726000 containerd[1476]: time="2025-09-05T23:52:21.725776463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:21.754473 systemd-networkd[1376]: cali5db7cd23658: Link UP Sep 5 23:52:21.756040 systemd-networkd[1376]: cali5db7cd23658: Gained carrier Sep 5 23:52:21.776854 systemd[1]: Started cri-containerd-8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3.scope - libcontainer container 8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3. Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.465 [INFO][4569] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0 calico-apiserver-6fdcb9f565- calico-apiserver ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5 993 0 2025-09-05 23:51:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fdcb9f565 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-2b989ca6ad calico-apiserver-6fdcb9f565-lqnhs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5db7cd23658 [] [] }} ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-lqnhs" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.465 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-lqnhs" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.571 [INFO][4595] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" HandleID="k8s-pod-network.fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.571 [INFO][4595] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" HandleID="k8s-pod-network.fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d30c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-2b989ca6ad", "pod":"calico-apiserver-6fdcb9f565-lqnhs", "timestamp":"2025-09-05 23:52:21.571666232 +0000 UTC"}, Hostname:"ci-4081-3-5-n-2b989ca6ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.571 [INFO][4595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.602 [INFO][4595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.603 [INFO][4595] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-2b989ca6ad' Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.634 [INFO][4595] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.649 [INFO][4595] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.679 [INFO][4595] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.686 [INFO][4595] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.694 [INFO][4595] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.694 [INFO][4595] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.698 [INFO][4595] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75 Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.711 [INFO][4595] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.736 [INFO][4595] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.6/26] block=192.168.87.0/26 handle="k8s-pod-network.fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.736 [INFO][4595] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.6/26] handle="k8s-pod-network.fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.737 [INFO][4595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
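Four consecutive assignments from the same block (.3 through .6) are what the affinity guarantees: every workload on ci-4081-3-5-n-2b989ca6ad draws from 192.168.87.0/26, serialized by the host-wide lock, until the block is exhausted. A /26 gives the node 2^(32-26) = 64 addresses, which the snippet below checks:

```go
// Quick check of the block capacity implied by the affinity above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.87.0/26")
	capacity := 1 << (32 - block.Bits())
	fmt.Printf("%s holds %d addresses\n", block, capacity) // 64
}
```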
Sep 5 23:52:21.797749 containerd[1476]: 2025-09-05 23:52:21.737 [INFO][4595] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.6/26] IPv6=[] ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" HandleID="k8s-pod-network.fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" Sep 5 23:52:21.799130 containerd[1476]: 2025-09-05 23:52:21.744 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-lqnhs" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0", GenerateName:"calico-apiserver-6fdcb9f565-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fdcb9f565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"", Pod:"calico-apiserver-6fdcb9f565-lqnhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5db7cd23658", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:21.799130 containerd[1476]: 2025-09-05 23:52:21.745 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.6/32] ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-lqnhs" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" Sep 5 23:52:21.799130 containerd[1476]: 2025-09-05 23:52:21.745 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5db7cd23658 ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-lqnhs" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" Sep 5 23:52:21.799130 containerd[1476]: 2025-09-05 23:52:21.768 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-lqnhs" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" Sep 5 23:52:21.799130 containerd[1476]: 2025-09-05 23:52:21.770 
[INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-lqnhs" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0", GenerateName:"calico-apiserver-6fdcb9f565-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fdcb9f565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75", Pod:"calico-apiserver-6fdcb9f565-lqnhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5db7cd23658", MAC:"3a:04:7c:dd:6b:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:21.799130 containerd[1476]: 2025-09-05 23:52:21.792 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75" Namespace="calico-apiserver" Pod="calico-apiserver-6fdcb9f565-lqnhs" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0" Sep 5 23:52:21.889987 systemd-networkd[1376]: cali8f86651b8bd: Link UP Sep 5 23:52:21.890144 systemd-networkd[1376]: cali8f86651b8bd: Gained carrier Sep 5 23:52:21.899181 containerd[1476]: time="2025-09-05T23:52:21.897115119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:21.899181 containerd[1476]: time="2025-09-05T23:52:21.897180280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:21.899181 containerd[1476]: time="2025-09-05T23:52:21.897196320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:21.899181 containerd[1476]: time="2025-09-05T23:52:21.897276080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.524 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0 csi-node-driver- calico-system 7f45c648-2314-4d67-9e44-04067f334c76 992 0 2025-09-05 23:51:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-n-2b989ca6ad csi-node-driver-6tlkz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f86651b8bd [] [] }} ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Namespace="calico-system" Pod="csi-node-driver-6tlkz" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.525 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Namespace="calico-system" Pod="csi-node-driver-6tlkz" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.610 [INFO][4605] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" HandleID="k8s-pod-network.e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.610 [INFO][4605] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" HandleID="k8s-pod-network.e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bb90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-2b989ca6ad", "pod":"csi-node-driver-6tlkz", "timestamp":"2025-09-05 23:52:21.61041733 +0000 UTC"}, Hostname:"ci-4081-3-5-n-2b989ca6ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.612 [INFO][4605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.737 [INFO][4605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.737 [INFO][4605] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-2b989ca6ad' Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.758 [INFO][4605] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.780 [INFO][4605] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.798 [INFO][4605] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.804 [INFO][4605] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.810 [INFO][4605] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.810 [INFO][4605] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.816 [INFO][4605] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7 Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.827 [INFO][4605] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.843 [INFO][4605] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.7/26] block=192.168.87.0/26 handle="k8s-pod-network.e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.843 [INFO][4605] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.7/26] handle="k8s-pod-network.e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.843 [INFO][4605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:52:21.927351 containerd[1476]: 2025-09-05 23:52:21.843 [INFO][4605] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.7/26] IPv6=[] ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" HandleID="k8s-pod-network.e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:21.927975 containerd[1476]: 2025-09-05 23:52:21.863 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Namespace="calico-system" Pod="csi-node-driver-6tlkz" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f45c648-2314-4d67-9e44-04067f334c76", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"", Pod:"csi-node-driver-6tlkz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f86651b8bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:21.927975 containerd[1476]: 2025-09-05 23:52:21.863 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.7/32] ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Namespace="calico-system" Pod="csi-node-driver-6tlkz" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:21.927975 containerd[1476]: 2025-09-05 23:52:21.864 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f86651b8bd ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Namespace="calico-system" Pod="csi-node-driver-6tlkz" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:21.927975 containerd[1476]: 2025-09-05 23:52:21.888 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Namespace="calico-system" Pod="csi-node-driver-6tlkz" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:21.927975 containerd[1476]: 2025-09-05 23:52:21.893 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Namespace="calico-system" Pod="csi-node-driver-6tlkz" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f45c648-2314-4d67-9e44-04067f334c76", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7", Pod:"csi-node-driver-6tlkz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f86651b8bd", MAC:"d6:53:96:53:65:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:21.927975 containerd[1476]: 2025-09-05 23:52:21.920 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7" Namespace="calico-system" Pod="csi-node-driver-6tlkz" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:21.953270 containerd[1476]: time="2025-09-05T23:52:21.953232963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58495b-ch6j7,Uid:08ac5769-ff1d-4260-a1f9-1ac84156590f,Namespace:calico-system,Attempt:1,} returns sandbox id \"8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3\"" Sep 5 23:52:21.979784 systemd[1]: Started cri-containerd-fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75.scope - libcontainer container fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75. Sep 5 23:52:21.980686 containerd[1476]: time="2025-09-05T23:52:21.978475561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:21.980686 containerd[1476]: time="2025-09-05T23:52:21.978877482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:21.980686 containerd[1476]: time="2025-09-05T23:52:21.978923122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:21.980686 containerd[1476]: time="2025-09-05T23:52:21.979048482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:22.023991 systemd[1]: Started cri-containerd-e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7.scope - libcontainer container e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7. Sep 5 23:52:22.051810 containerd[1476]: time="2025-09-05T23:52:22.050986868Z" level=info msg="StopPodSandbox for \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\"" Sep 5 23:52:22.098025 containerd[1476]: time="2025-09-05T23:52:22.097974697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fdcb9f565-lqnhs,Uid:ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75\"" Sep 5 23:52:22.101443 containerd[1476]: time="2025-09-05T23:52:22.100385340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tlkz,Uid:7f45c648-2314-4d67-9e44-04067f334c76,Namespace:calico-system,Attempt:1,} returns sandbox id \"e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7\"" Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.153 [INFO][4762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.154 [INFO][4762] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" iface="eth0" netns="/var/run/netns/cni-3d892b63-1912-9d6e-8f4b-80919b4ee9ba" Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.156 [INFO][4762] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" iface="eth0" netns="/var/run/netns/cni-3d892b63-1912-9d6e-8f4b-80919b4ee9ba" Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.156 [INFO][4762] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" iface="eth0" netns="/var/run/netns/cni-3d892b63-1912-9d6e-8f4b-80919b4ee9ba" Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.156 [INFO][4762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.156 [INFO][4762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.184 [INFO][4779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" HandleID="k8s-pod-network.68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.184 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.184 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.196 [WARNING][4779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" HandleID="k8s-pod-network.68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.196 [INFO][4779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" HandleID="k8s-pod-network.68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.201 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:22.206445 containerd[1476]: 2025-09-05 23:52:22.203 [INFO][4762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Sep 5 23:52:22.207709 containerd[1476]: time="2025-09-05T23:52:22.207645337Z" level=info msg="TearDown network for sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\" successfully" Sep 5 23:52:22.208061 containerd[1476]: time="2025-09-05T23:52:22.208027097Z" level=info msg="StopPodSandbox for \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\" returns successfully" Sep 5 23:52:22.209715 containerd[1476]: time="2025-09-05T23:52:22.209015299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xstw4,Uid:79bda304-0fec-4e72-8abd-1ec79680ee8b,Namespace:kube-system,Attempt:1,}" Sep 5 23:52:22.296049 systemd[1]: run-netns-cni\x2dfa9f4738\x2db7e5\x2dc293\x2dc0cb\x2d67d23b2eaed4.mount: Deactivated successfully. Sep 5 23:52:22.296242 systemd[1]: run-netns-cni\x2d3d892b63\x2d1912\x2d9d6e\x2d8f4b\x2d80919b4ee9ba.mount: Deactivated successfully. 
Sep 5 23:52:22.445134 systemd-networkd[1376]: calife71bd35071: Link UP Sep 5 23:52:22.445922 systemd-networkd[1376]: calife71bd35071: Gained carrier Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.301 [INFO][4787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0 coredns-668d6bf9bc- kube-system 79bda304-0fec-4e72-8abd-1ec79680ee8b 1007 0 2025-09-05 23:51:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-2b989ca6ad coredns-668d6bf9bc-xstw4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calife71bd35071 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xstw4" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.301 [INFO][4787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xstw4" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.351 [INFO][4799] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" HandleID="k8s-pod-network.41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.352 [INFO][4799] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" HandleID="k8s-pod-network.41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-2b989ca6ad", "pod":"coredns-668d6bf9bc-xstw4", "timestamp":"2025-09-05 23:52:22.351935667 +0000 UTC"}, Hostname:"ci-4081-3-5-n-2b989ca6ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.352 [INFO][4799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.352 [INFO][4799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.352 [INFO][4799] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-2b989ca6ad' Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.375 [INFO][4799] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.385 [INFO][4799] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.395 [INFO][4799] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.399 [INFO][4799] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.405 [INFO][4799] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.405 [INFO][4799] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.408 [INFO][4799] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.418 [INFO][4799] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.434 [INFO][4799] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.8/26] block=192.168.87.0/26 handle="k8s-pod-network.41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.435 [INFO][4799] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.8/26] handle="k8s-pod-network.41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" host="ci-4081-3-5-n-2b989ca6ad" Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.435 [INFO][4799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:52:22.473383 containerd[1476]: 2025-09-05 23:52:22.435 [INFO][4799] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.8/26] IPv6=[] ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" HandleID="k8s-pod-network.41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" Sep 5 23:52:22.474373 containerd[1476]: 2025-09-05 23:52:22.441 [INFO][4787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xstw4" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"79bda304-0fec-4e72-8abd-1ec79680ee8b", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"", Pod:"coredns-668d6bf9bc-xstw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife71bd35071", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:22.474373 containerd[1476]: 2025-09-05 23:52:22.441 [INFO][4787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.8/32] ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xstw4" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" Sep 5 23:52:22.474373 containerd[1476]: 2025-09-05 23:52:22.441 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife71bd35071 ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xstw4" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" Sep 5 23:52:22.474373 containerd[1476]: 2025-09-05 23:52:22.447 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-xstw4" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" Sep 5 23:52:22.474373 containerd[1476]: 2025-09-05 23:52:22.448 [INFO][4787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xstw4" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"79bda304-0fec-4e72-8abd-1ec79680ee8b", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd", Pod:"coredns-668d6bf9bc-xstw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife71bd35071", MAC:"0e:58:d8:3b:24:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:22.474373 containerd[1476]: 2025-09-05 23:52:22.467 [INFO][4787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xstw4" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0" Sep 5 23:52:22.491393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3101851954.mount: Deactivated successfully. Sep 5 23:52:22.510080 containerd[1476]: time="2025-09-05T23:52:22.509656298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:22.510080 containerd[1476]: time="2025-09-05T23:52:22.509728138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:22.510080 containerd[1476]: time="2025-09-05T23:52:22.509740778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:22.510080 containerd[1476]: time="2025-09-05T23:52:22.509826498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:22.547878 systemd[1]: Started cri-containerd-41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd.scope - libcontainer container 41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd. Sep 5 23:52:22.584303 containerd[1476]: time="2025-09-05T23:52:22.584261407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xstw4,Uid:79bda304-0fec-4e72-8abd-1ec79680ee8b,Namespace:kube-system,Attempt:1,} returns sandbox id \"41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd\"" Sep 5 23:52:22.589279 containerd[1476]: time="2025-09-05T23:52:22.588794853Z" level=info msg="CreateContainer within sandbox \"41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:52:22.608273 containerd[1476]: time="2025-09-05T23:52:22.608225842Z" level=info msg="CreateContainer within sandbox \"41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecf76ddba19f2e85dd968c70b07a82297c8ab894750880daf1653d87250e63a3\"" Sep 5 23:52:22.610675 containerd[1476]: time="2025-09-05T23:52:22.610634605Z" level=info msg="StartContainer for \"ecf76ddba19f2e85dd968c70b07a82297c8ab894750880daf1653d87250e63a3\"" Sep 5 23:52:22.638933 systemd[1]: Started cri-containerd-ecf76ddba19f2e85dd968c70b07a82297c8ab894750880daf1653d87250e63a3.scope - libcontainer container ecf76ddba19f2e85dd968c70b07a82297c8ab894750880daf1653d87250e63a3. 
Sep 5 23:52:22.671872 containerd[1476]: time="2025-09-05T23:52:22.670424853Z" level=info msg="StartContainer for \"ecf76ddba19f2e85dd968c70b07a82297c8ab894750880daf1653d87250e63a3\" returns successfully" Sep 5 23:52:23.050139 systemd-networkd[1376]: caliefadc3171ea: Gained IPv6LL Sep 5 23:52:23.109723 systemd-networkd[1376]: cali5db7cd23658: Gained IPv6LL Sep 5 23:52:23.448463 containerd[1476]: time="2025-09-05T23:52:23.447520371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:23.449106 containerd[1476]: time="2025-09-05T23:52:23.449072693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 23:52:23.449602 containerd[1476]: time="2025-09-05T23:52:23.449568854Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:23.453735 containerd[1476]: time="2025-09-05T23:52:23.453691220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:23.454852 containerd[1476]: time="2025-09-05T23:52:23.454784342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.709005783s" Sep 5 23:52:23.459689 containerd[1476]: time="2025-09-05T23:52:23.459612188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 5 23:52:23.462190 containerd[1476]: time="2025-09-05T23:52:23.461885192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:52:23.464069 containerd[1476]: time="2025-09-05T23:52:23.464021315Z" level=info msg="CreateContainer within sandbox \"945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 23:52:23.484382 containerd[1476]: time="2025-09-05T23:52:23.484297424Z" level=info msg="CreateContainer within sandbox \"945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fdd8926bb949f6251093cd4dd1a69fb50078c52361c1064b83f0b08bc6abfcd5\"" Sep 5 23:52:23.487761 containerd[1476]: time="2025-09-05T23:52:23.485499145Z" level=info msg="StartContainer for \"fdd8926bb949f6251093cd4dd1a69fb50078c52361c1064b83f0b08bc6abfcd5\"" Sep 5 23:52:23.493935 systemd-networkd[1376]: cali8f86651b8bd: Gained IPv6LL Sep 5 23:52:23.525794 systemd[1]: Started cri-containerd-fdd8926bb949f6251093cd4dd1a69fb50078c52361c1064b83f0b08bc6abfcd5.scope - libcontainer container fdd8926bb949f6251093cd4dd1a69fb50078c52361c1064b83f0b08bc6abfcd5. 
Sep 5 23:52:23.570810 containerd[1476]: time="2025-09-05T23:52:23.570740507Z" level=info msg="StartContainer for \"fdd8926bb949f6251093cd4dd1a69fb50078c52361c1064b83f0b08bc6abfcd5\" returns successfully" Sep 5 23:52:24.133958 systemd-networkd[1376]: calife71bd35071: Gained IPv6LL Sep 5 23:52:24.380818 kubelet[2566]: I0905 23:52:24.380715 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xstw4" podStartSLOduration=50.380689687 podStartE2EDuration="50.380689687s" podCreationTimestamp="2025-09-05 23:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:23.373531746 +0000 UTC m=+54.466783140" watchObservedRunningTime="2025-09-05 23:52:24.380689687 +0000 UTC m=+55.473941121" Sep 5 23:52:26.943576 containerd[1476]: time="2025-09-05T23:52:26.942516428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:26.944091 containerd[1476]: time="2025-09-05T23:52:26.943840870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 23:52:26.944708 containerd[1476]: time="2025-09-05T23:52:26.944664031Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:26.947706 containerd[1476]: time="2025-09-05T23:52:26.947630595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:26.948626 containerd[1476]: time="2025-09-05T23:52:26.948591716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.486652164s" Sep 5 23:52:26.948736 containerd[1476]: time="2025-09-05T23:52:26.948719156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:52:26.950594 containerd[1476]: time="2025-09-05T23:52:26.950512559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 23:52:26.953382 containerd[1476]: time="2025-09-05T23:52:26.953346482Z" level=info msg="CreateContainer within sandbox \"843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:52:26.971042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1074959121.mount: Deactivated successfully. 
Sep 5 23:52:26.973269 containerd[1476]: time="2025-09-05T23:52:26.973213109Z" level=info msg="CreateContainer within sandbox \"843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0ce6db2205ccf4c7d481314d57cda2955b256548c3951c6a60830da645ade1bd\"" Sep 5 23:52:26.973852 containerd[1476]: time="2025-09-05T23:52:26.973812389Z" level=info msg="StartContainer for \"0ce6db2205ccf4c7d481314d57cda2955b256548c3951c6a60830da645ade1bd\"" Sep 5 23:52:27.015885 systemd[1]: Started cri-containerd-0ce6db2205ccf4c7d481314d57cda2955b256548c3951c6a60830da645ade1bd.scope - libcontainer container 0ce6db2205ccf4c7d481314d57cda2955b256548c3951c6a60830da645ade1bd. Sep 5 23:52:27.064654 containerd[1476]: time="2025-09-05T23:52:27.064496867Z" level=info msg="StartContainer for \"0ce6db2205ccf4c7d481314d57cda2955b256548c3951c6a60830da645ade1bd\" returns successfully" Sep 5 23:52:27.403294 kubelet[2566]: I0905 23:52:27.403210 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-wbmmx" podStartSLOduration=24.687460071 podStartE2EDuration="28.403186544s" podCreationTimestamp="2025-09-05 23:51:59 +0000 UTC" firstStartedPulling="2025-09-05 23:52:19.744975037 +0000 UTC m=+50.838226431" lastFinishedPulling="2025-09-05 23:52:23.46070147 +0000 UTC m=+54.553952904" observedRunningTime="2025-09-05 23:52:24.381137327 +0000 UTC m=+55.474388761" watchObservedRunningTime="2025-09-05 23:52:27.403186544 +0000 UTC m=+58.496437938" Sep 5 23:52:29.028679 containerd[1476]: time="2025-09-05T23:52:29.028621089Z" level=info msg="StopPodSandbox for \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\"" Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.176 [WARNING][5054] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"42e44fef-e5ea-4ffe-8404-404c1a1ddfc1", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a", Pod:"goldmane-54d579b49d-wbmmx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8323365c05f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.176 [INFO][5054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.176 [INFO][5054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" iface="eth0" netns="" Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.176 [INFO][5054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.176 [INFO][5054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.233 [INFO][5064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" HandleID="k8s-pod-network.426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.233 [INFO][5064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.233 [INFO][5064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.257 [WARNING][5064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" HandleID="k8s-pod-network.426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.257 [INFO][5064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" HandleID="k8s-pod-network.426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.261 [INFO][5064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:29.268511 containerd[1476]: 2025-09-05 23:52:29.265 [INFO][5054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:29.268511 containerd[1476]: time="2025-09-05T23:52:29.268402063Z" level=info msg="TearDown network for sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\" successfully" Sep 5 23:52:29.268511 containerd[1476]: time="2025-09-05T23:52:29.268456423Z" level=info msg="StopPodSandbox for \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\" returns successfully" Sep 5 23:52:29.271570 containerd[1476]: time="2025-09-05T23:52:29.271324747Z" level=info msg="RemovePodSandbox for \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\"" Sep 5 23:52:29.286331 containerd[1476]: time="2025-09-05T23:52:29.286050205Z" level=info msg="Forcibly stopping sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\"" Sep 5 23:52:29.397998 kubelet[2566]: I0905 23:52:29.397604 2566 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:52:29.486571 containerd[1476]: time="2025-09-05T23:52:29.485370890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:29.488676 containerd[1476]: time="2025-09-05T23:52:29.488606254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 23:52:29.491375 containerd[1476]: time="2025-09-05T23:52:29.491328697Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:29.500121 containerd[1476]: time="2025-09-05T23:52:29.500060828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:29.509812 containerd[1476]: time="2025-09-05T23:52:29.509760920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.558579681s" Sep 5 23:52:29.510622 containerd[1476]: time="2025-09-05T23:52:29.510589001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference 
\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 23:52:29.516383 containerd[1476]: time="2025-09-05T23:52:29.516027048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:52:29.537320 containerd[1476]: time="2025-09-05T23:52:29.537203554Z" level=info msg="CreateContainer within sandbox \"8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.419 [WARNING][5078] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"42e44fef-e5ea-4ffe-8404-404c1a1ddfc1", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"945103ea6baac5d9f9436319f8354e4dad3038727c9cf55caf4060ba175e624a", Pod:"goldmane-54d579b49d-wbmmx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8323365c05f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.419 [INFO][5078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.419 [INFO][5078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" iface="eth0" netns="" Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.419 [INFO][5078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.419 [INFO][5078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.493 [INFO][5088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" HandleID="k8s-pod-network.426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.494 [INFO][5088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.494 [INFO][5088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.520 [WARNING][5088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" HandleID="k8s-pod-network.426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.520 [INFO][5088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" HandleID="k8s-pod-network.426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-goldmane--54d579b49d--wbmmx-eth0" Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.541 [INFO][5088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:29.554108 containerd[1476]: 2025-09-05 23:52:29.545 [INFO][5078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53" Sep 5 23:52:29.554615 containerd[1476]: time="2025-09-05T23:52:29.554146415Z" level=info msg="TearDown network for sandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\" successfully" Sep 5 23:52:29.568167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486764529.mount: Deactivated successfully. Sep 5 23:52:29.572422 containerd[1476]: time="2025-09-05T23:52:29.571816396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 5 23:52:29.572422 containerd[1476]: time="2025-09-05T23:52:29.571898956Z" level=info msg="RemovePodSandbox \"426b9f9c519762757873196dc812c1bc0ef7d51ee13b4e85452753b306003c53\" returns successfully"
Sep 5 23:52:29.573123 containerd[1476]: time="2025-09-05T23:52:29.572815597Z" level=info msg="StopPodSandbox for \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\""
Sep 5 23:52:29.576238 containerd[1476]: time="2025-09-05T23:52:29.576190602Z" level=info msg="CreateContainer within sandbox \"8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f8be8fe81a16b3f222738ae0d1018a4ce1cc2023f4ff0c23c85a6e50bff6049f\""
Sep 5 23:52:29.577806 containerd[1476]: time="2025-09-05T23:52:29.577407883Z" level=info msg="StartContainer for \"f8be8fe81a16b3f222738ae0d1018a4ce1cc2023f4ff0c23c85a6e50bff6049f\""
Sep 5 23:52:29.688221 systemd[1]: Started cri-containerd-f8be8fe81a16b3f222738ae0d1018a4ce1cc2023f4ff0c23c85a6e50bff6049f.scope - libcontainer container f8be8fe81a16b3f222738ae0d1018a4ce1cc2023f4ff0c23c85a6e50bff6049f.
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.707 [WARNING][5106] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0", GenerateName:"calico-apiserver-6fdcb9f565-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fdcb9f565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75", Pod:"calico-apiserver-6fdcb9f565-lqnhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5db7cd23658", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.707 [INFO][5106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e"
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.707 [INFO][5106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" iface="eth0" netns=""
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.419 [INFO][5078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e"
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.419 [INFO][5078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e"
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.752 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" HandleID="k8s-pod-network.3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0"
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.752 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.753 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.765 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" HandleID="k8s-pod-network.3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0"
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.765 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" HandleID="k8s-pod-network.3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0"
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.767 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:52:29.773018 containerd[1476]: 2025-09-05 23:52:29.770 [INFO][5106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e"
Sep 5 23:52:29.773018 containerd[1476]: time="2025-09-05T23:52:29.772882843Z" level=info msg="TearDown network for sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\" successfully"
Sep 5 23:52:29.773018 containerd[1476]: time="2025-09-05T23:52:29.772907003Z" level=info msg="StopPodSandbox for \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\" returns successfully"
Sep 5 23:52:29.775893 containerd[1476]: time="2025-09-05T23:52:29.775472447Z" level=info msg="RemovePodSandbox for \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\""
Sep 5 23:52:29.775893 containerd[1476]: time="2025-09-05T23:52:29.775520607Z" level=info msg="Forcibly stopping sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\""
Sep 5 23:52:29.789945 containerd[1476]: time="2025-09-05T23:52:29.789744504Z" level=info msg="StartContainer for \"f8be8fe81a16b3f222738ae0d1018a4ce1cc2023f4ff0c23c85a6e50bff6049f\" returns successfully"
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.833 [WARNING][5160] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0", GenerateName:"calico-apiserver-6fdcb9f565-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea42d30b-f6e9-471f-b9c8-2a775dbbc9d5", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fdcb9f565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75", Pod:"calico-apiserver-6fdcb9f565-lqnhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5db7cd23658", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.834 [INFO][5160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e"
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.834 [INFO][5160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" iface="eth0" netns=""
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.836 [INFO][5160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e"
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.836 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e"
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.862 [INFO][5167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" HandleID="k8s-pod-network.3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0"
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.863 [INFO][5167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.863 [INFO][5167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.882 [WARNING][5167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" HandleID="k8s-pod-network.3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0"
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.882 [INFO][5167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" HandleID="k8s-pod-network.3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--lqnhs-eth0"
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.884 [INFO][5167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:52:29.888257 containerd[1476]: 2025-09-05 23:52:29.886 [INFO][5160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e"
Sep 5 23:52:29.888799 containerd[1476]: time="2025-09-05T23:52:29.888467305Z" level=info msg="TearDown network for sandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\" successfully"
Sep 5 23:52:29.896483 containerd[1476]: time="2025-09-05T23:52:29.896219715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:52:29.896483 containerd[1476]: time="2025-09-05T23:52:29.896296315Z" level=info msg="RemovePodSandbox \"3339505cbbc7b2799ca27313779ac6a1f4d44ae1cf655356487afdce130f4f9e\" returns successfully"
Sep 5 23:52:29.897089 containerd[1476]: time="2025-09-05T23:52:29.897054716Z" level=info msg="StopPodSandbox for \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\""
Sep 5 23:52:29.901659 kubelet[2566]: I0905 23:52:29.901581 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fdcb9f565-bpnlv" podStartSLOduration=28.792591032 podStartE2EDuration="35.901564442s" podCreationTimestamp="2025-09-05 23:51:54 +0000 UTC" firstStartedPulling="2025-09-05 23:52:19.840706348 +0000 UTC m=+50.933957742" lastFinishedPulling="2025-09-05 23:52:26.949679678 +0000 UTC m=+58.042931152" observedRunningTime="2025-09-05 23:52:27.402408583 +0000 UTC m=+58.495659977" watchObservedRunningTime="2025-09-05 23:52:29.901564442 +0000 UTC m=+60.994815836"
Sep 5 23:52:29.914599 containerd[1476]: time="2025-09-05T23:52:29.914553538Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:52:29.918187 containerd[1476]: time="2025-09-05T23:52:29.916772580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 5 23:52:29.925324 containerd[1476]: time="2025-09-05T23:52:29.925097551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 409.017863ms"
Sep 5 23:52:29.925702 containerd[1476]: time="2025-09-05T23:52:29.925656471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 5 23:52:29.929073 containerd[1476]: time="2025-09-05T23:52:29.929026075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 5 23:52:29.932742 containerd[1476]: time="2025-09-05T23:52:29.932659960Z" level=info msg="CreateContainer within sandbox \"fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 23:52:29.979907 containerd[1476]: time="2025-09-05T23:52:29.979563457Z" level=info msg="CreateContainer within sandbox \"fe982d42dec041aa1ea6a19ddedb68bb4155497cf3b25cc0ce7d864c1d2bed75\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"48716596d38090e4a14e9528bfd6736403f72f1335dcff4b0979284b9b8b602b\""
Sep 5 23:52:29.982158 containerd[1476]: time="2025-09-05T23:52:29.980954379Z" level=info msg="StartContainer for \"48716596d38090e4a14e9528bfd6736403f72f1335dcff4b0979284b9b8b602b\""
Sep 5 23:52:30.051241 systemd[1]: Started cri-containerd-48716596d38090e4a14e9528bfd6736403f72f1335dcff4b0979284b9b8b602b.scope - libcontainer container 48716596d38090e4a14e9528bfd6736403f72f1335dcff4b0979284b9b8b602b.
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:29.999 [WARNING][5190] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"79bda304-0fec-4e72-8abd-1ec79680ee8b", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd", Pod:"coredns-668d6bf9bc-xstw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife71bd35071", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.000 [INFO][5190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6"
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.000 [INFO][5190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" iface="eth0" netns=""
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.000 [INFO][5190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6"
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.000 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6"
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.080 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" HandleID="k8s-pod-network.68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0"
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.080 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.080 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.094 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" HandleID="k8s-pod-network.68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0"
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.094 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" HandleID="k8s-pod-network.68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0"
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.098 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:52:30.105276 containerd[1476]: 2025-09-05 23:52:30.102 [INFO][5190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6"
Sep 5 23:52:30.107122 containerd[1476]: time="2025-09-05T23:52:30.105181929Z" level=info msg="TearDown network for sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\" successfully"
Sep 5 23:52:30.107122 containerd[1476]: time="2025-09-05T23:52:30.106240210Z" level=info msg="StopPodSandbox for \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\" returns successfully"
Sep 5 23:52:30.109273 containerd[1476]: time="2025-09-05T23:52:30.107226771Z" level=info msg="RemovePodSandbox for \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\""
Sep 5 23:52:30.109273 containerd[1476]: time="2025-09-05T23:52:30.107259971Z" level=info msg="Forcibly stopping sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\""
Sep 5 23:52:30.192827 containerd[1476]: time="2025-09-05T23:52:30.192482314Z" level=info msg="StartContainer for \"48716596d38090e4a14e9528bfd6736403f72f1335dcff4b0979284b9b8b602b\" returns successfully"
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.164 [WARNING][5242] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"79bda304-0fec-4e72-8abd-1ec79680ee8b", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"41481528f5c64c569a9d88509a2acfdb28d6a00cd310fe18e594306f0323fefd", Pod:"coredns-668d6bf9bc-xstw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife71bd35071", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.164 [INFO][5242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6"
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.164 [INFO][5242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" iface="eth0" netns=""
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.164 [INFO][5242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6"
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.164 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6"
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.200 [INFO][5249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" HandleID="k8s-pod-network.68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0"
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.200 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.200 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.210 [WARNING][5249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" HandleID="k8s-pod-network.68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0"
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.210 [INFO][5249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" HandleID="k8s-pod-network.68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--xstw4-eth0"
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.213 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:52:30.218501 containerd[1476]: 2025-09-05 23:52:30.215 [INFO][5242] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6"
Sep 5 23:52:30.219596 containerd[1476]: time="2025-09-05T23:52:30.219096666Z" level=info msg="TearDown network for sandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\" successfully"
Sep 5 23:52:30.236910 containerd[1476]: time="2025-09-05T23:52:30.236844727Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:52:30.237058 containerd[1476]: time="2025-09-05T23:52:30.236935567Z" level=info msg="RemovePodSandbox \"68580312b96b81e9914ce17fa07763eb373082e76e610a8404f8112f7f8efbd6\" returns successfully"
Sep 5 23:52:30.240088 containerd[1476]: time="2025-09-05T23:52:30.239671130Z" level=info msg="StopPodSandbox for \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\""
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.293 [WARNING][5287] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e4cbde1-da6c-4658-8a40-e74e4e58e9f5", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588", Pod:"coredns-668d6bf9bc-jphsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76aa7fe2584", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.294 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11"
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.294 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" iface="eth0" netns=""
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.294 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11"
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.294 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11"
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.322 [INFO][5294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" HandleID="k8s-pod-network.0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0"
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.323 [INFO][5294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.323 [INFO][5294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.340 [WARNING][5294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" HandleID="k8s-pod-network.0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0"
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.340 [INFO][5294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" HandleID="k8s-pod-network.0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0"
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.343 [INFO][5294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:52:30.346713 containerd[1476]: 2025-09-05 23:52:30.345 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11"
Sep 5 23:52:30.347672 containerd[1476]: time="2025-09-05T23:52:30.346607739Z" level=info msg="TearDown network for sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\" successfully"
Sep 5 23:52:30.347672 containerd[1476]: time="2025-09-05T23:52:30.347240979Z" level=info msg="StopPodSandbox for \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\" returns successfully"
Sep 5 23:52:30.349168 containerd[1476]: time="2025-09-05T23:52:30.349122182Z" level=info msg="RemovePodSandbox for \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\""
Sep 5 23:52:30.349168 containerd[1476]: time="2025-09-05T23:52:30.349167422Z" level=info msg="Forcibly stopping sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\""
Sep 5 23:52:30.440527 kubelet[2566]: I0905 23:52:30.440414 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fdcb9f565-lqnhs" podStartSLOduration=28.61491128 podStartE2EDuration="36.440395771s" podCreationTimestamp="2025-09-05 23:51:54 +0000 UTC" firstStartedPulling="2025-09-05 23:52:22.102585663 +0000 UTC m=+53.195837057" lastFinishedPulling="2025-09-05 23:52:29.928070154 +0000 UTC m=+61.021321548" observedRunningTime="2025-09-05 23:52:30.439931171 +0000 UTC m=+61.533182605" watchObservedRunningTime="2025-09-05 23:52:30.440395771 +0000 UTC m=+61.533647165"
Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.408 [WARNING][5308] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e4cbde1-da6c-4658-8a40-e74e4e58e9f5", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"6c410b8a59a43e8b1cb6b839c56670b90c91d0efda5aa6c1120b0dd4430a9588", Pod:"coredns-668d6bf9bc-jphsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76aa7fe2584", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.411 [INFO][5308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11"
Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.411 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" iface="eth0" netns=""
Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.412 [INFO][5308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11"
Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.414 [INFO][5308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11"
Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.490 [INFO][5320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" HandleID="k8s-pod-network.0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0"
Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.494 [INFO][5320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.496 [INFO][5320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.517 [WARNING][5320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" HandleID="k8s-pod-network.0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.517 [INFO][5320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" HandleID="k8s-pod-network.0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-coredns--668d6bf9bc--jphsd-eth0" Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.520 [INFO][5320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:30.532795 containerd[1476]: 2025-09-05 23:52:30.526 [INFO][5308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11" Sep 5 23:52:30.533693 containerd[1476]: time="2025-09-05T23:52:30.532839602Z" level=info msg="TearDown network for sandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\" successfully" Sep 5 23:52:30.541765 containerd[1476]: time="2025-09-05T23:52:30.541671853Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:52:30.541765 containerd[1476]: time="2025-09-05T23:52:30.541753613Z" level=info msg="RemovePodSandbox \"0ddbb6b87c02b325ee731aa15fb1a68f293d3e687c56dfc37cb1b9aa43a84d11\" returns successfully" Sep 5 23:52:30.543051 containerd[1476]: time="2025-09-05T23:52:30.542723934Z" level=info msg="StopPodSandbox for \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\"" Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.596 [WARNING][5355] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f45c648-2314-4d67-9e44-04067f334c76", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7", Pod:"csi-node-driver-6tlkz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f86651b8bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.596 [INFO][5355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.596 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" iface="eth0" netns="" Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.597 [INFO][5355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.597 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.645 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" HandleID="k8s-pod-network.a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.645 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.645 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.665 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" HandleID="k8s-pod-network.a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.665 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" HandleID="k8s-pod-network.a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.668 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:30.678867 containerd[1476]: 2025-09-05 23:52:30.676 [INFO][5355] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:30.679987 containerd[1476]: time="2025-09-05T23:52:30.679528578Z" level=info msg="TearDown network for sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\" successfully" Sep 5 23:52:30.679987 containerd[1476]: time="2025-09-05T23:52:30.679666978Z" level=info msg="StopPodSandbox for \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\" returns successfully" Sep 5 23:52:30.680921 containerd[1476]: time="2025-09-05T23:52:30.680508979Z" level=info msg="RemovePodSandbox for \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\"" Sep 5 23:52:30.680921 containerd[1476]: time="2025-09-05T23:52:30.680557379Z" level=info msg="Forcibly stopping sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\"" Sep 5 23:52:30.700066 kubelet[2566]: I0905 23:52:30.699360 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68d58495b-ch6j7" podStartSLOduration=24.14517817 podStartE2EDuration="31.699337682s" podCreationTimestamp="2025-09-05 23:51:59 +0000 UTC" firstStartedPulling="2025-09-05 23:52:21.959840733 +0000 UTC m=+53.053092127" lastFinishedPulling="2025-09-05 23:52:29.514000245 +0000 UTC m=+60.607251639" observedRunningTime="2025-09-05 23:52:30.499104762 +0000 UTC m=+61.592356156" watchObservedRunningTime="2025-09-05 23:52:30.699337682 +0000 UTC m=+61.792589076" Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.761 [WARNING][5387] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f45c648-2314-4d67-9e44-04067f334c76", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7", Pod:"csi-node-driver-6tlkz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f86651b8bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.762 [INFO][5387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.762 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" iface="eth0" netns="" Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.762 [INFO][5387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.762 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.786 [INFO][5394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" HandleID="k8s-pod-network.a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.786 [INFO][5394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.786 [INFO][5394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.797 [WARNING][5394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" HandleID="k8s-pod-network.a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.797 [INFO][5394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" HandleID="k8s-pod-network.a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-csi--node--driver--6tlkz-eth0" Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.801 [INFO][5394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:30.805802 containerd[1476]: 2025-09-05 23:52:30.803 [INFO][5387] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2" Sep 5 23:52:30.807013 containerd[1476]: time="2025-09-05T23:52:30.805926610Z" level=info msg="TearDown network for sandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\" successfully" Sep 5 23:52:30.811713 containerd[1476]: time="2025-09-05T23:52:30.811406136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:52:30.811713 containerd[1476]: time="2025-09-05T23:52:30.811571977Z" level=info msg="RemovePodSandbox \"a6a2e7bfc4ec4f8c341e00153954f0f515aa1064ed6fc779f3620fa35896dca2\" returns successfully" Sep 5 23:52:30.813188 containerd[1476]: time="2025-09-05T23:52:30.812699178Z" level=info msg="StopPodSandbox for \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\"" Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.861 [WARNING][5408] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0", GenerateName:"calico-apiserver-6fdcb9f565-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8fb7313-1f3f-4238-86b4-1214e62f55c2", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fdcb9f565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328", Pod:"calico-apiserver-6fdcb9f565-bpnlv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c26e2e82ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.862 [INFO][5408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.862 [INFO][5408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" iface="eth0" netns="" Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.862 [INFO][5408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.862 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.888 [INFO][5415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" HandleID="k8s-pod-network.ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.888 [INFO][5415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.889 [INFO][5415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.899 [WARNING][5415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" HandleID="k8s-pod-network.ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.899 [INFO][5415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" HandleID="k8s-pod-network.ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.901 [INFO][5415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:30.907359 containerd[1476]: 2025-09-05 23:52:30.904 [INFO][5408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:30.907359 containerd[1476]: time="2025-09-05T23:52:30.907093171Z" level=info msg="TearDown network for sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\" successfully" Sep 5 23:52:30.907359 containerd[1476]: time="2025-09-05T23:52:30.907119931Z" level=info msg="StopPodSandbox for \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\" returns successfully" Sep 5 23:52:30.909314 containerd[1476]: time="2025-09-05T23:52:30.908148653Z" level=info msg="RemovePodSandbox for \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\"" Sep 5 23:52:30.909314 containerd[1476]: time="2025-09-05T23:52:30.908182413Z" level=info msg="Forcibly stopping sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\"" Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:30.959 [WARNING][5430] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0", GenerateName:"calico-apiserver-6fdcb9f565-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8fb7313-1f3f-4238-86b4-1214e62f55c2", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fdcb9f565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"843d4d11abdab8d9dfedbc04d27e28096d6620368905c09772da11413a06e328", Pod:"calico-apiserver-6fdcb9f565-bpnlv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c26e2e82ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:30.960 [INFO][5430] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:30.960 [INFO][5430] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" iface="eth0" netns="" Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:30.960 [INFO][5430] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:30.960 [INFO][5430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:31.000 [INFO][5438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" HandleID="k8s-pod-network.ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:31.001 [INFO][5438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:31.001 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:31.015 [WARNING][5438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" HandleID="k8s-pod-network.ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:31.015 [INFO][5438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" HandleID="k8s-pod-network.ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--apiserver--6fdcb9f565--bpnlv-eth0" Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:31.017 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:31.026663 containerd[1476]: 2025-09-05 23:52:31.024 [INFO][5430] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670" Sep 5 23:52:31.028603 containerd[1476]: time="2025-09-05T23:52:31.026638114Z" level=info msg="TearDown network for sandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\" successfully" Sep 5 23:52:31.033530 containerd[1476]: time="2025-09-05T23:52:31.033480522Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:52:31.033687 containerd[1476]: time="2025-09-05T23:52:31.033616642Z" level=info msg="RemovePodSandbox \"ea4dc5a7068e502e8ca4cf00443ff13eb4da2cbd6b13d3a7dc5fb29c34e00670\" returns successfully" Sep 5 23:52:31.035502 containerd[1476]: time="2025-09-05T23:52:31.035458844Z" level=info msg="StopPodSandbox for \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\"" Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.166 [WARNING][5452] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0", GenerateName:"calico-kube-controllers-68d58495b-", Namespace:"calico-system", SelfLink:"", UID:"08ac5769-ff1d-4260-a1f9-1ac84156590f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d58495b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3", Pod:"calico-kube-controllers-68d58495b-ch6j7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefadc3171ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.167 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.167 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" iface="eth0" netns="" Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.167 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.167 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.198 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" HandleID="k8s-pod-network.ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.198 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.198 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.209 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" HandleID="k8s-pod-network.ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.210 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" HandleID="k8s-pod-network.ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.212 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:31.215172 containerd[1476]: 2025-09-05 23:52:31.213 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:31.216276 containerd[1476]: time="2025-09-05T23:52:31.215221615Z" level=info msg="TearDown network for sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\" successfully" Sep 5 23:52:31.216276 containerd[1476]: time="2025-09-05T23:52:31.215252255Z" level=info msg="StopPodSandbox for \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\" returns successfully" Sep 5 23:52:31.216724 containerd[1476]: time="2025-09-05T23:52:31.216683297Z" level=info msg="RemovePodSandbox for \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\"" Sep 5 23:52:31.216788 containerd[1476]: time="2025-09-05T23:52:31.216724017Z" level=info msg="Forcibly stopping sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\"" Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.269 [WARNING][5475] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0", GenerateName:"calico-kube-controllers-68d58495b-", Namespace:"calico-system", SelfLink:"", UID:"08ac5769-ff1d-4260-a1f9-1ac84156590f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d58495b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-2b989ca6ad", ContainerID:"8f7d939b4197047f521d2d8376da5caee17246f7e7915a0f617c65395b8d23b3", Pod:"calico-kube-controllers-68d58495b-ch6j7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefadc3171ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.270 [INFO][5475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.270 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" iface="eth0" netns="" Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.270 [INFO][5475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.270 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.304 [INFO][5482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" HandleID="k8s-pod-network.ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.304 [INFO][5482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.304 [INFO][5482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.314 [WARNING][5482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" HandleID="k8s-pod-network.ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.314 [INFO][5482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" HandleID="k8s-pod-network.ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-calico--kube--controllers--68d58495b--ch6j7-eth0" Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.316 [INFO][5482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:31.323603 containerd[1476]: 2025-09-05 23:52:31.319 [INFO][5475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178" Sep 5 23:52:31.323603 containerd[1476]: time="2025-09-05T23:52:31.322447101Z" level=info msg="TearDown network for sandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\" successfully" Sep 5 23:52:31.334583 containerd[1476]: time="2025-09-05T23:52:31.331491791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:52:31.334583 containerd[1476]: time="2025-09-05T23:52:31.331598752Z" level=info msg="RemovePodSandbox \"ff2548a85edc004b861675c4ba1c3a251cecbd046958d9ed0897bc296234b178\" returns successfully" Sep 5 23:52:31.335986 containerd[1476]: time="2025-09-05T23:52:31.335784516Z" level=info msg="StopPodSandbox for \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\"" Sep 5 23:52:31.510661 containerd[1476]: time="2025-09-05T23:52:31.509636640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:31.513848 containerd[1476]: time="2025-09-05T23:52:31.513793285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 23:52:31.516170 containerd[1476]: time="2025-09-05T23:52:31.516123248Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:31.520955 containerd[1476]: time="2025-09-05T23:52:31.520773493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:31.523156 containerd[1476]: time="2025-09-05T23:52:31.522280415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.592949499s" Sep 5 23:52:31.523156 containerd[1476]: time="2025-09-05T23:52:31.522328495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference 
\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 23:52:31.530575 containerd[1476]: time="2025-09-05T23:52:31.530502985Z" level=info msg="CreateContainer within sandbox \"e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.426 [WARNING][5501] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.428 [INFO][5501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.428 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" iface="eth0" netns="" Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.428 [INFO][5501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.428 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.504 [INFO][5508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" HandleID="k8s-pod-network.7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.505 [INFO][5508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.505 [INFO][5508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.521 [WARNING][5508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" HandleID="k8s-pod-network.7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.521 [INFO][5508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" HandleID="k8s-pod-network.7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.523 [INFO][5508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:31.533511 containerd[1476]: 2025-09-05 23:52:31.530 [INFO][5501] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:31.534163 containerd[1476]: time="2025-09-05T23:52:31.533632428Z" level=info msg="TearDown network for sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\" successfully" Sep 5 23:52:31.534163 containerd[1476]: time="2025-09-05T23:52:31.533661148Z" level=info msg="StopPodSandbox for \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\" returns successfully" Sep 5 23:52:31.535377 containerd[1476]: time="2025-09-05T23:52:31.535024510Z" level=info msg="RemovePodSandbox for \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\"" Sep 5 23:52:31.535377 containerd[1476]: time="2025-09-05T23:52:31.535063390Z" level=info msg="Forcibly stopping sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\"" Sep 5 23:52:31.553846 containerd[1476]: time="2025-09-05T23:52:31.553799212Z" level=info msg="CreateContainer within sandbox \"e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8e3aa0684229a1dce90f68d043c46e142b1b1ebc583cefa3bdc925555c94105f\"" Sep 5 23:52:31.555847 containerd[1476]: time="2025-09-05T23:52:31.555714374Z" level=info msg="StartContainer for \"8e3aa0684229a1dce90f68d043c46e142b1b1ebc583cefa3bdc925555c94105f\"" Sep 5 23:52:31.608771 systemd[1]: Started cri-containerd-8e3aa0684229a1dce90f68d043c46e142b1b1ebc583cefa3bdc925555c94105f.scope - libcontainer container 8e3aa0684229a1dce90f68d043c46e142b1b1ebc583cefa3bdc925555c94105f. Sep 5 23:52:31.666697 containerd[1476]: time="2025-09-05T23:52:31.666532704Z" level=info msg="StartContainer for \"8e3aa0684229a1dce90f68d043c46e142b1b1ebc583cefa3bdc925555c94105f\" returns successfully" Sep 5 23:52:31.672164 containerd[1476]: time="2025-09-05T23:52:31.671881430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.630 [WARNING][5522] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" WorkloadEndpoint="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.630 [INFO][5522] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.630 [INFO][5522] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" iface="eth0" netns="" Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.630 [INFO][5522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.630 [INFO][5522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.685 [INFO][5553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" HandleID="k8s-pod-network.7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.686 [INFO][5553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.686 [INFO][5553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.703 [WARNING][5553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" HandleID="k8s-pod-network.7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.703 [INFO][5553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" HandleID="k8s-pod-network.7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Workload="ci--4081--3--5--n--2b989ca6ad-k8s-whisker--6c798c5dc4--n4jj8-eth0" Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.706 [INFO][5553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:52:31.711611 containerd[1476]: 2025-09-05 23:52:31.709 [INFO][5522] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001" Sep 5 23:52:31.711611 containerd[1476]: time="2025-09-05T23:52:31.711186196Z" level=info msg="TearDown network for sandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\" successfully" Sep 5 23:52:31.717484 containerd[1476]: time="2025-09-05T23:52:31.717439364Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 5 23:52:31.717984 containerd[1476]: time="2025-09-05T23:52:31.717763484Z" level=info msg="RemovePodSandbox \"7408f7fae0956a4cfa891fa2c090af8b4b64904b25d4f81bc19c22ce10d57001\" returns successfully" Sep 5 23:52:33.473667 containerd[1476]: time="2025-09-05T23:52:33.473617050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:33.474799 containerd[1476]: time="2025-09-05T23:52:33.474758291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 23:52:33.476595 containerd[1476]: time="2025-09-05T23:52:33.475788572Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:33.479561 containerd[1476]: time="2025-09-05T23:52:33.479500496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:33.480698 containerd[1476]: time="2025-09-05T23:52:33.480654297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.808726827s" Sep 5 23:52:33.480913 containerd[1476]: time="2025-09-05T23:52:33.480871938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 23:52:33.484185 containerd[1476]: time="2025-09-05T23:52:33.484125581Z" level=info msg="CreateContainer within sandbox \"e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 23:52:33.507159 containerd[1476]: time="2025-09-05T23:52:33.507105727Z" level=info msg="CreateContainer within sandbox \"e92e2fcf1c2feb028ec92050c3e3bf0ebafa90f270cc37e1383bd891f739b2f7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2fb0606dc1322c2c1b5961afc184bffe9d9e61274c50c67806befb55b35a499d\"" Sep 5 23:52:33.508931 containerd[1476]: time="2025-09-05T23:52:33.508865529Z" level=info msg="StartContainer for \"2fb0606dc1322c2c1b5961afc184bffe9d9e61274c50c67806befb55b35a499d\"" Sep 5 23:52:33.560952 systemd[1]: Started cri-containerd-2fb0606dc1322c2c1b5961afc184bffe9d9e61274c50c67806befb55b35a499d.scope - libcontainer container 2fb0606dc1322c2c1b5961afc184bffe9d9e61274c50c67806befb55b35a499d. 
Sep 5 23:52:33.615661 containerd[1476]: time="2025-09-05T23:52:33.615441928Z" level=info msg="StartContainer for \"2fb0606dc1322c2c1b5961afc184bffe9d9e61274c50c67806befb55b35a499d\" returns successfully" Sep 5 23:52:34.188924 kubelet[2566]: I0905 23:52:34.188861 2566 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 23:52:34.195470 kubelet[2566]: I0905 23:52:34.195379 2566 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 23:52:42.379867 kubelet[2566]: I0905 23:52:42.379726 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6tlkz" podStartSLOduration=32.004966929 podStartE2EDuration="43.379707878s" podCreationTimestamp="2025-09-05 23:51:59 +0000 UTC" firstStartedPulling="2025-09-05 23:52:22.10731927 +0000 UTC m=+53.200570664" lastFinishedPulling="2025-09-05 23:52:33.482060219 +0000 UTC m=+64.575311613" observedRunningTime="2025-09-05 23:52:34.498299063 +0000 UTC m=+65.591550417" watchObservedRunningTime="2025-09-05 23:52:42.379707878 +0000 UTC m=+73.472959272" Sep 5 23:54:00.478856 systemd[1]: run-containerd-runc-k8s.io-f8be8fe81a16b3f222738ae0d1018a4ce1cc2023f4ff0c23c85a6e50bff6049f-runc.1H1nZl.mount: Deactivated successfully. Sep 5 23:54:06.446314 systemd[1]: run-containerd-runc-k8s.io-f8be8fe81a16b3f222738ae0d1018a4ce1cc2023f4ff0c23c85a6e50bff6049f-runc.WEeKHG.mount: Deactivated successfully. Sep 5 23:54:13.790081 systemd[1]: Started sshd@7-91.98.45.119:22-139.178.68.195:38010.service - OpenSSH per-connection server daemon (139.178.68.195:38010). Sep 5 23:54:14.794204 sshd[5938]: Accepted publickey for core from 139.178.68.195 port 38010 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:14.796884 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:14.802229 systemd-logind[1458]: New session 8 of user core. Sep 5 23:54:14.808800 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 23:54:15.611954 sshd[5938]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:15.621119 systemd[1]: sshd@7-91.98.45.119:22-139.178.68.195:38010.service: Deactivated successfully. Sep 5 23:54:15.624529 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 23:54:15.626862 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Sep 5 23:54:15.628702 systemd-logind[1458]: Removed session 8. Sep 5 23:54:20.791052 systemd[1]: Started sshd@8-91.98.45.119:22-139.178.68.195:42808.service - OpenSSH per-connection server daemon (139.178.68.195:42808). Sep 5 23:54:21.789427 sshd[5953]: Accepted publickey for core from 139.178.68.195 port 42808 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:21.791076 sshd[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:21.801272 systemd-logind[1458]: New session 9 of user core. Sep 5 23:54:21.810236 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 23:54:22.550013 sshd[5953]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:22.554772 systemd[1]: sshd@8-91.98.45.119:22-139.178.68.195:42808.service: Deactivated successfully. Sep 5 23:54:22.557982 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 23:54:22.560904 systemd-logind[1458]: Session 9 logged out. 
Waiting for processes to exit. Sep 5 23:54:22.562447 systemd-logind[1458]: Removed session 9. Sep 5 23:54:22.726920 systemd[1]: Started sshd@9-91.98.45.119:22-139.178.68.195:42816.service - OpenSSH per-connection server daemon (139.178.68.195:42816). Sep 5 23:54:23.719930 sshd[5968]: Accepted publickey for core from 139.178.68.195 port 42816 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:23.723518 sshd[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:23.729273 systemd-logind[1458]: New session 10 of user core. Sep 5 23:54:23.738878 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 23:54:24.530956 sshd[5968]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:24.537242 systemd[1]: sshd@9-91.98.45.119:22-139.178.68.195:42816.service: Deactivated successfully. Sep 5 23:54:24.540795 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 23:54:24.542085 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Sep 5 23:54:24.543178 systemd-logind[1458]: Removed session 10. Sep 5 23:54:24.715102 systemd[1]: Started sshd@10-91.98.45.119:22-139.178.68.195:42826.service - OpenSSH per-connection server daemon (139.178.68.195:42826). Sep 5 23:54:25.709262 sshd[5979]: Accepted publickey for core from 139.178.68.195 port 42826 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:25.711750 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:25.717520 systemd-logind[1458]: New session 11 of user core. Sep 5 23:54:25.721737 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 23:54:26.482623 sshd[5979]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:26.487413 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Sep 5 23:54:26.487760 systemd[1]: sshd@10-91.98.45.119:22-139.178.68.195:42826.service: Deactivated successfully. Sep 5 23:54:26.491359 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 23:54:26.494939 systemd-logind[1458]: Removed session 11. Sep 5 23:54:31.659174 systemd[1]: Started sshd@11-91.98.45.119:22-139.178.68.195:40362.service - OpenSSH per-connection server daemon (139.178.68.195:40362). Sep 5 23:54:32.652382 sshd[6056]: Accepted publickey for core from 139.178.68.195 port 40362 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:32.656073 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:32.661514 systemd-logind[1458]: New session 12 of user core. Sep 5 23:54:32.666767 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 23:54:33.410570 sshd[6056]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:33.415714 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Sep 5 23:54:33.416724 systemd[1]: sshd@11-91.98.45.119:22-139.178.68.195:40362.service: Deactivated successfully. Sep 5 23:54:33.420881 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 23:54:33.422500 systemd-logind[1458]: Removed session 12. Sep 5 23:54:38.594818 systemd[1]: Started sshd@12-91.98.45.119:22-139.178.68.195:40376.service - OpenSSH per-connection server daemon (139.178.68.195:40376). 
Sep 5 23:54:39.583165 sshd[6070]: Accepted publickey for core from 139.178.68.195 port 40376 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:39.586556 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:39.591276 systemd-logind[1458]: New session 13 of user core. Sep 5 23:54:39.596723 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 23:54:40.348136 sshd[6070]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:40.354093 systemd[1]: sshd@12-91.98.45.119:22-139.178.68.195:40376.service: Deactivated successfully. Sep 5 23:54:40.357480 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 23:54:40.359456 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Sep 5 23:54:40.360464 systemd-logind[1458]: Removed session 13. Sep 5 23:54:45.555082 systemd[1]: Started sshd@13-91.98.45.119:22-139.178.68.195:40926.service - OpenSSH per-connection server daemon (139.178.68.195:40926). Sep 5 23:54:46.666880 sshd[6102]: Accepted publickey for core from 139.178.68.195 port 40926 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:46.668118 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:46.674589 systemd-logind[1458]: New session 14 of user core. Sep 5 23:54:46.680827 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 23:54:47.549559 sshd[6102]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:47.555581 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Sep 5 23:54:47.557150 systemd[1]: sshd@13-91.98.45.119:22-139.178.68.195:40926.service: Deactivated successfully. Sep 5 23:54:47.560359 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 23:54:47.562657 systemd-logind[1458]: Removed session 14. Sep 5 23:54:47.718790 systemd[1]: Started sshd@14-91.98.45.119:22-139.178.68.195:40930.service - OpenSSH per-connection server daemon (139.178.68.195:40930). Sep 5 23:54:48.715805 sshd[6115]: Accepted publickey for core from 139.178.68.195 port 40930 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:48.716932 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:48.725700 systemd-logind[1458]: New session 15 of user core. Sep 5 23:54:48.732323 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 23:54:49.680209 sshd[6115]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:49.684807 systemd[1]: sshd@14-91.98.45.119:22-139.178.68.195:40930.service: Deactivated successfully. Sep 5 23:54:49.688297 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 23:54:49.690654 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Sep 5 23:54:49.692174 systemd-logind[1458]: Removed session 15. Sep 5 23:54:49.864448 systemd[1]: Started sshd@15-91.98.45.119:22-139.178.68.195:40936.service - OpenSSH per-connection server daemon (139.178.68.195:40936). Sep 5 23:54:50.876080 sshd[6127]: Accepted publickey for core from 139.178.68.195 port 40936 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:50.879344 sshd[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:50.886230 systemd-logind[1458]: New session 16 of user core. Sep 5 23:54:50.890777 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 5 23:54:52.280159 sshd[6127]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:52.286719 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Sep 5 23:54:52.287309 systemd[1]: sshd@15-91.98.45.119:22-139.178.68.195:40936.service: Deactivated successfully. Sep 5 23:54:52.292294 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 23:54:52.294083 systemd-logind[1458]: Removed session 16. Sep 5 23:54:52.454879 systemd[1]: Started sshd@16-91.98.45.119:22-139.178.68.195:54340.service - OpenSSH per-connection server daemon (139.178.68.195:54340). Sep 5 23:54:53.449288 sshd[6148]: Accepted publickey for core from 139.178.68.195 port 54340 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:53.450383 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:53.456577 systemd-logind[1458]: New session 17 of user core. Sep 5 23:54:53.463898 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 23:54:54.361517 sshd[6148]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:54.367457 systemd[1]: sshd@16-91.98.45.119:22-139.178.68.195:54340.service: Deactivated successfully. Sep 5 23:54:54.369839 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 23:54:54.370851 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Sep 5 23:54:54.372404 systemd-logind[1458]: Removed session 17. Sep 5 23:54:54.556868 systemd[1]: Started sshd@17-91.98.45.119:22-139.178.68.195:54352.service - OpenSSH per-connection server daemon (139.178.68.195:54352). Sep 5 23:54:55.605973 sshd[6165]: Accepted publickey for core from 139.178.68.195 port 54352 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:55.608927 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:55.614415 systemd-logind[1458]: New session 18 of user core. Sep 5 23:54:55.619768 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 23:54:56.400409 sshd[6165]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:56.409159 systemd[1]: sshd@17-91.98.45.119:22-139.178.68.195:54352.service: Deactivated successfully. Sep 5 23:54:56.413322 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 23:54:56.414511 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Sep 5 23:54:56.415659 systemd-logind[1458]: Removed session 18. Sep 5 23:55:01.570894 systemd[1]: Started sshd@18-91.98.45.119:22-139.178.68.195:32882.service - OpenSSH per-connection server daemon (139.178.68.195:32882). Sep 5 23:55:02.567385 sshd[6219]: Accepted publickey for core from 139.178.68.195 port 32882 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:02.569290 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:02.577744 systemd-logind[1458]: New session 19 of user core. Sep 5 23:55:02.583770 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 23:55:03.346994 sshd[6219]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:03.355567 systemd[1]: sshd@18-91.98.45.119:22-139.178.68.195:32882.service: Deactivated successfully. Sep 5 23:55:03.359617 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 23:55:03.365799 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Sep 5 23:55:03.368372 systemd-logind[1458]: Removed session 19. 
Sep 5 23:55:08.526034 systemd[1]: Started sshd@19-91.98.45.119:22-139.178.68.195:32892.service - OpenSSH per-connection server daemon (139.178.68.195:32892). Sep 5 23:55:09.518550 sshd[6254]: Accepted publickey for core from 139.178.68.195 port 32892 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:09.521157 sshd[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:09.526132 systemd-logind[1458]: New session 20 of user core. Sep 5 23:55:09.533891 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 23:55:10.295908 sshd[6254]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:10.301306 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. Sep 5 23:55:10.302059 systemd[1]: sshd@19-91.98.45.119:22-139.178.68.195:32892.service: Deactivated successfully. Sep 5 23:55:10.305061 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 23:55:10.307049 systemd-logind[1458]: Removed session 20. Sep 5 23:55:24.851744 systemd[1]: cri-containerd-f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3.scope: Deactivated successfully. Sep 5 23:55:24.854810 systemd[1]: cri-containerd-f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3.scope: Consumed 21.223s CPU time. Sep 5 23:55:24.876682 containerd[1476]: time="2025-09-05T23:55:24.876234534Z" level=info msg="shim disconnected" id=f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3 namespace=k8s.io Sep 5 23:55:24.876682 containerd[1476]: time="2025-09-05T23:55:24.876298055Z" level=warning msg="cleaning up after shim disconnected" id=f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3 namespace=k8s.io Sep 5 23:55:24.876682 containerd[1476]: time="2025-09-05T23:55:24.876309895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:24.877795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3-rootfs.mount: Deactivated successfully. Sep 5 23:55:24.977695 kubelet[2566]: I0905 23:55:24.977211 2566 scope.go:117] "RemoveContainer" containerID="f00508e68e3e57fcf47e8c0cde709b9c673090401f43475aebae4b75f23410e3" Sep 5 23:55:24.980694 containerd[1476]: time="2025-09-05T23:55:24.980644186Z" level=info msg="CreateContainer within sandbox \"e38103ec5e5b27802e649de375ca937fe34bb085cf9d9209b3d2941c60584294\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 5 23:55:24.998072 containerd[1476]: time="2025-09-05T23:55:24.997914821Z" level=info msg="CreateContainer within sandbox \"e38103ec5e5b27802e649de375ca937fe34bb085cf9d9209b3d2941c60584294\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ba815d6126f7d705d043f4e0f95be8ed1a826d974aecc9851ebe626b017be3c7\"" Sep 5 23:55:24.999675 containerd[1476]: time="2025-09-05T23:55:24.999607636Z" level=info msg="StartContainer for \"ba815d6126f7d705d043f4e0f95be8ed1a826d974aecc9851ebe626b017be3c7\"" Sep 5 23:55:25.034755 systemd[1]: Started cri-containerd-ba815d6126f7d705d043f4e0f95be8ed1a826d974aecc9851ebe626b017be3c7.scope - libcontainer container ba815d6126f7d705d043f4e0f95be8ed1a826d974aecc9851ebe626b017be3c7. 
Sep 5 23:55:25.062983 containerd[1476]: time="2025-09-05T23:55:25.062931075Z" level=info msg="StartContainer for \"ba815d6126f7d705d043f4e0f95be8ed1a826d974aecc9851ebe626b017be3c7\" returns successfully" Sep 5 23:55:25.226456 systemd[1]: cri-containerd-a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275.scope: Deactivated successfully. Sep 5 23:55:25.228895 systemd[1]: cri-containerd-a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275.scope: Consumed 4.179s CPU time, 17.7M memory peak, 0B memory swap peak. Sep 5 23:55:25.257869 kubelet[2566]: E0905 23:55:25.257749 2566 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58674->10.0.0.2:2379: read: connection timed out" Sep 5 23:55:25.270060 containerd[1476]: time="2025-09-05T23:55:25.269995224Z" level=info msg="shim disconnected" id=a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275 namespace=k8s.io Sep 5 23:55:25.270060 containerd[1476]: time="2025-09-05T23:55:25.270054025Z" level=warning msg="cleaning up after shim disconnected" id=a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275 namespace=k8s.io Sep 5 23:55:25.270060 containerd[1476]: time="2025-09-05T23:55:25.270067865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:25.878285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275-rootfs.mount: Deactivated successfully. Sep 5 23:55:25.984395 kubelet[2566]: I0905 23:55:25.984356 2566 scope.go:117] "RemoveContainer" containerID="a729d1a6a543997da2a63e07374e9196e41a7abbe6de6d18d2524e257ef38275" Sep 5 23:55:25.986668 containerd[1476]: time="2025-09-05T23:55:25.986625954Z" level=info msg="CreateContainer within sandbox \"341070dedd30ed22b3f3c8ef6dea27156a9cd91b69e5a4066cca4c3b57d335ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 5 23:55:26.002698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606454216.mount: Deactivated successfully. Sep 5 23:55:26.004302 containerd[1476]: time="2025-09-05T23:55:26.004228909Z" level=info msg="CreateContainer within sandbox \"341070dedd30ed22b3f3c8ef6dea27156a9cd91b69e5a4066cca4c3b57d335ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"867f77106fcbf9bb74759191a1165299b46181c9b68d48f781114a682f3d7100\"" Sep 5 23:55:26.004906 containerd[1476]: time="2025-09-05T23:55:26.004790274Z" level=info msg="StartContainer for \"867f77106fcbf9bb74759191a1165299b46181c9b68d48f781114a682f3d7100\"" Sep 5 23:55:26.038730 systemd[1]: Started cri-containerd-867f77106fcbf9bb74759191a1165299b46181c9b68d48f781114a682f3d7100.scope - libcontainer container 867f77106fcbf9bb74759191a1165299b46181c9b68d48f781114a682f3d7100. Sep 5 23:55:26.085989 containerd[1476]: time="2025-09-05T23:55:26.085944543Z" level=info msg="StartContainer for \"867f77106fcbf9bb74759191a1165299b46181c9b68d48f781114a682f3d7100\" returns successfully" Sep 5 23:55:30.360884 systemd[1]: cri-containerd-4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224.scope: Deactivated successfully. Sep 5 23:55:30.361456 systemd[1]: cri-containerd-4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224.scope: Consumed 3.636s CPU time, 16.1M memory peak, 0B memory swap peak. 
Sep 5 23:55:30.387522 containerd[1476]: time="2025-09-05T23:55:30.385872353Z" level=info msg="shim disconnected" id=4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224 namespace=k8s.io Sep 5 23:55:30.387522 containerd[1476]: time="2025-09-05T23:55:30.385937713Z" level=warning msg="cleaning up after shim disconnected" id=4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224 namespace=k8s.io Sep 5 23:55:30.387522 containerd[1476]: time="2025-09-05T23:55:30.385947274Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:30.389297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224-rootfs.mount: Deactivated successfully. Sep 5 23:55:30.504707 kubelet[2566]: E0905 23:55:30.504399 2566 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58476->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-2b989ca6ad.1862882a572ae918 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-2b989ca6ad,UID:ceb9ea542bfb62c399dd48be82646d46,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-2b989ca6ad,},FirstTimestamp:2025-09-05 23:55:20.060348696 +0000 UTC m=+231.153600090,LastTimestamp:2025-09-05 23:55:20.060348696 +0000 UTC m=+231.153600090,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-2b989ca6ad,}" Sep 5 23:55:31.006900 kubelet[2566]: I0905 23:55:31.006679 2566 scope.go:117] "RemoveContainer" containerID="4ba34e0a50c1c952c0558fa2d67cf7e4c92b15a2541efe9896963a63b8c74224" Sep 5 23:55:31.008829 containerd[1476]: time="2025-09-05T23:55:31.008704090Z" level=info msg="CreateContainer within sandbox \"c7e74a793718a112ea0a9d276d0e7c008e9891611a05f70d4c899bbc76526618\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 5 23:55:31.027985 containerd[1476]: time="2025-09-05T23:55:31.027820929Z" level=info msg="CreateContainer within sandbox \"c7e74a793718a112ea0a9d276d0e7c008e9891611a05f70d4c899bbc76526618\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d8942c9c72ddd7d10dd9297311ba06385f1364efa71207c7ea87e09c51092068\"" Sep 5 23:55:31.029687 containerd[1476]: time="2025-09-05T23:55:31.028625575Z" level=info msg="StartContainer for \"d8942c9c72ddd7d10dd9297311ba06385f1364efa71207c7ea87e09c51092068\"" Sep 5 23:55:31.060320 systemd[1]: Started cri-containerd-d8942c9c72ddd7d10dd9297311ba06385f1364efa71207c7ea87e09c51092068.scope - libcontainer container d8942c9c72ddd7d10dd9297311ba06385f1364efa71207c7ea87e09c51092068. Sep 5 23:55:31.099178 containerd[1476]: time="2025-09-05T23:55:31.099051319Z" level=info msg="StartContainer for \"d8942c9c72ddd7d10dd9297311ba06385f1364efa71207c7ea87e09c51092068\" returns successfully"