Sep 13 00:26:37.904903 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 00:26:37.904928 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 13 00:26:37.904938 kernel: KASLR enabled
Sep 13 00:26:37.904944 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 13 00:26:37.904950 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Sep 13 00:26:37.904956 kernel: random: crng init done
Sep 13 00:26:37.904963 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:26:37.904969 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 13 00:26:37.904975 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:26:37.904983 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:26:37.904989 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:26:37.904995 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:26:37.905001 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:26:37.905007 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:26:37.905015 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:26:37.905023 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:26:37.905029 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:26:37.905035 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:26:37.905042 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:26:37.905048 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 13 00:26:37.905054 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:26:37.905061 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 13 00:26:37.905068 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Sep 13 00:26:37.905074 kernel: Zone ranges:
Sep 13 00:26:37.905080 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 13 00:26:37.905088 kernel: DMA32 empty
Sep 13 00:26:37.905104 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 13 00:26:37.905114 kernel: Movable zone start for each node
Sep 13 00:26:37.905121 kernel: Early memory node ranges
Sep 13 00:26:37.905127 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Sep 13 00:26:37.905133 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 13 00:26:37.905140 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 13 00:26:37.905146 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 13 00:26:37.905152 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 13 00:26:37.905158 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 13 00:26:37.905200 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 13 00:26:37.905209 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 13 00:26:37.905219 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 13 00:26:37.905225 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:26:37.905232 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 00:26:37.905241 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:26:37.905248 kernel: psci: Trusted OS migration not required
Sep 13 00:26:37.905255 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:26:37.905263 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 13 00:26:37.905270 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 13 00:26:37.905277 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 13 00:26:37.905284 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 00:26:37.905290 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:26:37.905306 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:26:37.905313 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 00:26:37.905320 kernel: CPU features: detected: Spectre-v4
Sep 13 00:26:37.905327 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:26:37.905333 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:26:37.905342 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:26:37.905349 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 00:26:37.905356 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 00:26:37.905363 kernel: alternatives: applying boot alternatives
Sep 13 00:26:37.905371 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:26:37.905379 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:26:37.905386 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:26:37.905393 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:26:37.905399 kernel: Fallback order for Node 0: 0
Sep 13 00:26:37.905406 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Sep 13 00:26:37.905413 kernel: Policy zone: Normal
Sep 13 00:26:37.905421 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:26:37.905428 kernel: software IO TLB: area num 2.
Sep 13 00:26:37.905435 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Sep 13 00:26:37.905442 kernel: Memory: 3882740K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 213260K reserved, 0K cma-reserved)
Sep 13 00:26:37.905449 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:26:37.905456 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:26:37.905464 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:26:37.905471 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:26:37.905478 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:26:37.905484 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:26:37.905491 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:26:37.905502 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:26:37.905509 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:26:37.905515 kernel: GICv3: 256 SPIs implemented
Sep 13 00:26:37.905522 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:26:37.905529 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:26:37.905536 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 13 00:26:37.905543 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 13 00:26:37.905549 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 13 00:26:37.905556 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:26:37.905563 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:26:37.905570 kernel: GICv3: using LPI property table @0x00000001000e0000
Sep 13 00:26:37.905577 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Sep 13 00:26:37.905585 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:26:37.905592 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:26:37.905599 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 00:26:37.905606 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 00:26:37.905613 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 00:26:37.905620 kernel: Console: colour dummy device 80x25
Sep 13 00:26:37.905627 kernel: ACPI: Core revision 20230628
Sep 13 00:26:37.905635 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 00:26:37.905641 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:26:37.905649 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:26:37.905658 kernel: landlock: Up and running.
Sep 13 00:26:37.905665 kernel: SELinux: Initializing.
Sep 13 00:26:37.906143 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:26:37.906156 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:26:37.906163 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:26:37.906171 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:26:37.906178 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:26:37.906186 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:26:37.906193 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 13 00:26:37.906205 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 13 00:26:37.906212 kernel: Remapping and enabling EFI services.
Sep 13 00:26:37.906219 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:26:37.906226 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:26:37.906233 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 13 00:26:37.906241 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Sep 13 00:26:37.906248 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:26:37.906255 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 00:26:37.906262 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:26:37.906269 kernel: SMP: Total of 2 processors activated.
Sep 13 00:26:37.906277 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:26:37.906285 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 00:26:37.906342 kernel: CPU features: detected: Common not Private translations
Sep 13 00:26:37.906353 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:26:37.906361 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 13 00:26:37.906368 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 00:26:37.906376 kernel: CPU features: detected: LSE atomic instructions
Sep 13 00:26:37.906383 kernel: CPU features: detected: Privileged Access Never
Sep 13 00:26:37.906391 kernel: CPU features: detected: RAS Extension Support
Sep 13 00:26:37.906400 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 13 00:26:37.906407 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:26:37.906415 kernel: alternatives: applying system-wide alternatives
Sep 13 00:26:37.906422 kernel: devtmpfs: initialized
Sep 13 00:26:37.906430 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:26:37.906437 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:26:37.906444 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:26:37.906452 kernel: SMBIOS 3.0.0 present.
Sep 13 00:26:37.906461 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Sep 13 00:26:37.906468 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:26:37.906476 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:26:37.906483 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:26:37.906491 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:26:37.906499 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:26:37.906506 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:26:37.906514 kernel: cpuidle: using governor menu
Sep 13 00:26:37.906521 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Sep 13 00:26:37.906530 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:26:37.906538 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:26:37.906545 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:26:37.906553 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:26:37.906560 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 13 00:26:37.906567 kernel: Modules: 0 pages in range for non-PLT usage
Sep 13 00:26:37.906575 kernel: Modules: 508992 pages in range for PLT usage
Sep 13 00:26:37.906582 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:26:37.906590 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:26:37.906599 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:26:37.906606 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 13 00:26:37.906614 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:26:37.906621 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:26:37.906629 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:26:37.906637 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 13 00:26:37.906644 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:26:37.906651 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:26:37.906659 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:26:37.906668 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:26:37.906959 kernel: ACPI: Interpreter enabled
Sep 13 00:26:37.906967 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:26:37.906975 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:26:37.906982 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 00:26:37.906990 kernel: printk: console [ttyAMA0] enabled
Sep 13 00:26:37.907144 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:26:37.907224 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:26:37.907291 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:26:37.907376 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:26:37.907441 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 13 00:26:37.907451 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 13 00:26:37.907459 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 13 00:26:37.907531 kernel: PCI host bridge to bus 0000:00
Sep 13 00:26:37.907597 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 13 00:26:37.907657 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:26:37.909839 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 13 00:26:37.909995 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:26:37.910085 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 13 00:26:37.911023 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Sep 13 00:26:37.911122 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Sep 13 00:26:37.911218 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 13 00:26:37.911289 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:26:37.911429 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Sep 13 00:26:37.911500 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 13 00:26:37.911573 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Sep 13 00:26:37.911639 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 13 00:26:37.911738 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Sep 13 00:26:37.911807 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 13 00:26:37.911882 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Sep 13 00:26:37.911948 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 13 00:26:37.912022 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Sep 13 00:26:37.912087 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 13 00:26:37.912190 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Sep 13 00:26:37.912262 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 13 00:26:37.912353 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Sep 13 00:26:37.912424 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 13 00:26:37.912496 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Sep 13 00:26:37.912563 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:26:37.912647 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Sep 13 00:26:37.916545 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Sep 13 00:26:37.916766 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Sep 13 00:26:37.916850 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:26:37.916920 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Sep 13 00:26:37.916991 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:26:37.917088 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 13 00:26:37.917189 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 13 00:26:37.917282 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Sep 13 00:26:37.917555 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 13 00:26:37.917632 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Sep 13 00:26:37.917742 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Sep 13 00:26:37.917819 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 13 00:26:37.917907 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Sep 13 00:26:37.917979 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 13 00:26:37.918056 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Sep 13 00:26:37.918129 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 13 00:26:37.918201 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Sep 13 00:26:37.918284 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 13 00:26:37.918379 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:26:37.918452 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Sep 13 00:26:37.918521 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Sep 13 00:26:37.918593 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 13 00:26:37.918661 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Sep 13 00:26:37.919005 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Sep 13 00:26:37.919089 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Sep 13 00:26:37.919164 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Sep 13 00:26:37.919230 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Sep 13 00:26:37.919320 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Sep 13 00:26:37.919398 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 13 00:26:37.919466 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Sep 13 00:26:37.919539 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Sep 13 00:26:37.919605 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 13 00:26:37.921783 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Sep 13 00:26:37.921944 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Sep 13 00:26:37.922013 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 13 00:26:37.922080 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Sep 13 00:26:37.922151 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Sep 13 00:26:37.922220 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 13 00:26:37.922285 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Sep 13 00:26:37.922379 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Sep 13 00:26:37.922457 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 13 00:26:37.922524 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Sep 13 00:26:37.922598 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Sep 13 00:26:37.922664 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 13 00:26:37.922795 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Sep 13 00:26:37.922867 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Sep 13 00:26:37.922932 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 13 00:26:37.923001 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Sep 13 00:26:37.923072 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Sep 13 00:26:37.923139 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Sep 13 00:26:37.923208 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 13 00:26:37.923277 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Sep 13 00:26:37.923411 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 13 00:26:37.923483 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Sep 13 00:26:37.923558 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 13 00:26:37.923626 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Sep 13 00:26:37.923715 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 13 00:26:37.923785 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Sep 13 00:26:37.923853 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 13 00:26:37.923919 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Sep 13 00:26:37.923987 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 13 00:26:37.924058 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Sep 13 00:26:37.924125 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 13 00:26:37.924190 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Sep 13 00:26:37.924257 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 13 00:26:37.924349 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Sep 13 00:26:37.924425 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 13 00:26:37.924491 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Sep 13 00:26:37.924562 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Sep 13 00:26:37.924627 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Sep 13 00:26:37.924760 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 13 00:26:37.924829 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Sep 13 00:26:37.924895 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 13 00:26:37.924960 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Sep 13 00:26:37.925027 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 13 00:26:37.925092 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Sep 13 00:26:37.925163 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 13 00:26:37.925228 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Sep 13 00:26:37.925311 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 13 00:26:37.925382 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Sep 13 00:26:37.925450 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 13 00:26:37.925516 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Sep 13 00:26:37.925584 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 13 00:26:37.925650 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Sep 13 00:26:37.925734 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 13 00:26:37.925803 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Sep 13 00:26:37.925875 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Sep 13 00:26:37.925953 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Sep 13 00:26:37.926024 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Sep 13 00:26:37.926091 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:26:37.926158 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Sep 13 00:26:37.926228 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 13 00:26:37.926328 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 13 00:26:37.926406 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Sep 13 00:26:37.926482 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 13 00:26:37.926552 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Sep 13 00:26:37.926626 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 13 00:26:37.928393 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 13 00:26:37.928494 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Sep 13 00:26:37.928573 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 13 00:26:37.928642 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 13 00:26:37.928816 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Sep 13 00:26:37.928889 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 13 00:26:37.928954 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 13 00:26:37.929027 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Sep 13 00:26:37.929103 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 13 00:26:37.929174 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 13 00:26:37.929241 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 13 00:26:37.929327 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 13 00:26:37.929399 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Sep 13 00:26:37.929474 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 13 00:26:37.929544 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Sep 13 00:26:37.929614 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 13 00:26:37.931275 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 13 00:26:37.931415 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 13 00:26:37.931497 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 13 00:26:37.931567 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Sep 13 00:26:37.931636 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Sep 13 00:26:37.933754 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 13 00:26:37.933834 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 13 00:26:37.933913 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Sep 13 00:26:37.933990 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 13 00:26:37.934062 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Sep 13 00:26:37.934131 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Sep 13 00:26:37.934202 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Sep 13 00:26:37.934270 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 13 00:26:37.934360 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 13 00:26:37.934453 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Sep 13 00:26:37.934541 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 13 00:26:37.934617 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 13 00:26:37.934696 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 13 00:26:37.934762 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Sep 13 00:26:37.936817 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 13 00:26:37.936912 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 13 00:26:37.936978 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Sep 13 00:26:37.937055 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Sep 13 00:26:37.937124 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 13 00:26:37.937183 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 13 00:26:37.937241 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:26:37.937332 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 13 00:26:37.937400 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 13 00:26:37.937461 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Sep 13 00:26:37.937534 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 13 00:26:37.937613 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Sep 13 00:26:37.937760 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Sep 13 00:26:37.937842 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 13 00:26:37.937905 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Sep 13 00:26:37.937965 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Sep 13 00:26:37.938040 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 13 00:26:37.938111 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 13 00:26:37.938181 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Sep 13 00:26:37.938262 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 13 00:26:37.938370 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Sep 13 00:26:37.938436 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Sep 13 00:26:37.938509 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 13 00:26:37.938576 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Sep 13 00:26:37.938637 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Sep 13 00:26:37.940118 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 13 00:26:37.940249 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Sep 13 00:26:37.940367 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Sep 13 00:26:37.940453 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 13 00:26:37.940526 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Sep 13 00:26:37.940594 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Sep 13 00:26:37.940752 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 13 00:26:37.940838 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Sep 13 00:26:37.940908 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Sep 13 00:26:37.940925 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 13 00:26:37.940935 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:26:37.940944 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:26:37.940953 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:26:37.940962 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:26:37.940971 kernel: iommu: Default domain type: Translated
Sep 13 00:26:37.940980 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:26:37.940989 kernel: efivars: Registered efivars operations
Sep 13 00:26:37.940998 kernel: vgaarb: loaded
Sep 13 00:26:37.941009 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:26:37.941018 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:26:37.941028 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:26:37.941125 kernel: pnp: PnP ACPI init
Sep 13 00:26:37.941140 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 13 00:26:37.941148 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:26:37.941158 kernel: NET: Registered PF_INET protocol family
Sep 13 00:26:37.941167 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:26:37.941176 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:26:37.941188 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:26:37.941197 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:26:37.941206 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:26:37.941218 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:26:37.941227 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:26:37.941236 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:26:37.941347 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:26:37.941364 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Sep 13 00:26:37.941376 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:26:37.941385 kernel: kvm [1]: HYP mode not available
Sep 13 00:26:37.941393 kernel: Initialise system trusted keyrings
Sep 13 00:26:37.941402 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:26:37.941411 kernel: Key type asymmetric registered
Sep 13 00:26:37.941420 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:26:37.941429 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 00:26:37.941438 kernel: io scheduler mq-deadline registered
Sep 13 00:26:37.941447 kernel: io scheduler kyber registered
Sep 13 00:26:37.941457 kernel: io scheduler bfq registered
Sep 13 00:26:37.941545 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 13 00:26:37.941624 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Sep 13 00:26:37.941736 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Sep 13 00:26:37.941825 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:26:37.941901 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Sep 13 00:26:37.941978 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Sep 13 00:26:37.942063 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:26:37.942140 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Sep 13 00:26:37.942216 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Sep 13 00:26:37.942306 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:26:37.942389 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Sep 13 00:26:37.942467 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Sep 13 00:26:37.942551 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:26:37.942626 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Sep 13 00:26:37.942765 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Sep 13 00:26:37.942851 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:26:37.942943 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Sep 13 00:26:37.943024 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Sep 13 00:26:37.943110 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:26:37.943188 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Sep 13 00:26:37.943263 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Sep 13 00:26:37.943396 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:26:37.943481 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Sep 13 00:26:37.943560 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Sep 13 00:26:37.943576 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:26:37.943660 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Sep 13 00:26:37.943860 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Sep 13 00:26:37.943940 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Sep 13 00:26:37.943952 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:26:37.943961 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 00:26:37.943970 kernel: ACPI: button: Power Button [PWRB]
Sep 13 00:26:37.944059 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 13 00:26:37.944143 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Sep 13 00:26:37.944155 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Sep 13 00:26:37.944164 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:26:37.944241 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 13 00:26:37.944253 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Sep 13 00:26:37.944262 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Sep 13 00:26:37.944271 kernel: thunder_xcv, ver 1.0
Sep 13 00:26:37.944279 kernel: thunder_bgx, ver 1.0
Sep 13 00:26:37.944333 kernel: nicpf, ver 1.0
Sep 13 00:26:37.944457 kernel: nicvf, ver 1.0
Sep 13 00:26:37.944534 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:26:37.944546 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:26:37 UTC (1757723197)
Sep 13 00:26:37.944554 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:26:37.944562 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 13 00:26:37.944570 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 13 00:26:37.944577 kernel: watchdog: Hard watchdog permanently disabled
Sep 13 00:26:37.944590 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:26:37.944598 kernel: Segment Routing with IPv6
Sep 13 00:26:37.944606 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:26:37.944613 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:26:37.944623 kernel: Key type dns_resolver registered
Sep 13 00:26:37.944631 kernel: registered taskstats version 1
Sep 13 00:26:37.944639 kernel: Loading compiled-in X.509 certificates
Sep 13 00:26:37.944664 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 13 00:26:37.944816 kernel: Key type .fscrypt registered
Sep 13 00:26:37.944833 kernel: Key type fscrypt-provisioning registered
Sep 13 00:26:37.944841 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:26:37.944849 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:26:37.944857 kernel: ima: No architecture policies found
Sep 13 00:26:37.944865 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:26:37.944873 kernel: clk: Disabling unused clocks
Sep 13 00:26:37.944880 kernel: Freeing unused kernel memory: 39488K
Sep 13 00:26:37.944888 kernel: Run /init as init process
Sep 13 00:26:37.944896 kernel: with arguments:
Sep 13 00:26:37.944905 kernel: /init
Sep 13 00:26:37.944913 kernel: with environment:
Sep 13 00:26:37.944920 kernel: HOME=/
Sep 13 00:26:37.944928 kernel: TERM=linux
Sep 13 00:26:37.944938 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:26:37.944949 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:26:37.944957 systemd[1]: Detected virtualization kvm.
Sep 13 00:26:37.944966 systemd[1]: Detected architecture arm64.
Sep 13 00:26:37.944975 systemd[1]: Running in initrd.
Sep 13 00:26:37.944983 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:26:37.944991 systemd[1]: Hostname set to .
Sep 13 00:26:37.944999 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:26:37.945008 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:26:37.945017 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:26:37.945026 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:26:37.945036 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:26:37.945044 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:26:37.945053 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:26:37.945064 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:26:37.945072 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:26:37.945081 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:26:37.945089 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:26:37.945100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:26:37.945108 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:26:37.945116 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:26:37.945125 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:26:37.945133 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:26:37.945142 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:26:37.945150 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:26:37.945158 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:26:37.945167 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:26:37.945177 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:26:37.945185 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:26:37.945193 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:26:37.945202 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:26:37.945210 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:26:37.945218 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:26:37.945227 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:26:37.945235 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:26:37.945246 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:26:37.945254 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:26:37.945262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:26:37.945271 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:26:37.945279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:26:37.945340 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:26:37.945369 systemd-journald[236]: Collecting audit messages is disabled.
Sep 13 00:26:37.945378 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:26:37.945387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:26:37.945397 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:26:37.945406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:26:37.945414 kernel: Bridge firewalling registered
Sep 13 00:26:37.945424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:26:37.945445 systemd-journald[236]: Journal started
Sep 13 00:26:37.945445 systemd-journald[236]: Runtime Journal (/run/log/journal/efdbb3bf743449d5977c3c25d192833d) is 8.0M, max 76.6M, 68.6M free.
Sep 13 00:26:37.911118 systemd-modules-load[237]: Inserted module 'overlay'
Sep 13 00:26:37.938104 systemd-modules-load[237]: Inserted module 'br_netfilter'
Sep 13 00:26:37.950721 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:26:37.952721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:26:37.958150 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:26:37.959717 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:26:37.962857 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:26:37.979262 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:26:37.987621 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:26:37.991722 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:26:37.998220 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:26:37.999097 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:26:38.006963 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:26:38.028688 dracut-cmdline[272]: dracut-dracut-053
Sep 13 00:26:38.030142 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:26:38.038421 systemd-resolved[271]: Positive Trust Anchors:
Sep 13 00:26:38.038438 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:26:38.038470 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:26:38.044523 systemd-resolved[271]: Defaulting to hostname 'linux'.
Sep 13 00:26:38.046465 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:26:38.047374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:26:38.131724 kernel: SCSI subsystem initialized
Sep 13 00:26:38.136718 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:26:38.144744 kernel: iscsi: registered transport (tcp)
Sep 13 00:26:38.158777 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:26:38.158868 kernel: QLogic iSCSI HBA Driver
Sep 13 00:26:38.219058 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:26:38.225871 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:26:38.245947 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:26:38.246010 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:26:38.246959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:26:38.297063 kernel: raid6: neonx8 gen() 15637 MB/s
Sep 13 00:26:38.313742 kernel: raid6: neonx4 gen() 15574 MB/s
Sep 13 00:26:38.330770 kernel: raid6: neonx2 gen() 13189 MB/s
Sep 13 00:26:38.347755 kernel: raid6: neonx1 gen() 10438 MB/s
Sep 13 00:26:38.364725 kernel: raid6: int64x8 gen() 6918 MB/s
Sep 13 00:26:38.381754 kernel: raid6: int64x4 gen() 7302 MB/s
Sep 13 00:26:38.398724 kernel: raid6: int64x2 gen() 6086 MB/s
Sep 13 00:26:38.415767 kernel: raid6: int64x1 gen() 5021 MB/s
Sep 13 00:26:38.415922 kernel: raid6: using algorithm neonx8 gen() 15637 MB/s
Sep 13 00:26:38.432759 kernel: raid6: .... xor() 11943 MB/s, rmw enabled
Sep 13 00:26:38.432836 kernel: raid6: using neon recovery algorithm
Sep 13 00:26:38.443319 kernel: xor: measuring software checksum speed
Sep 13 00:26:38.443389 kernel: 8regs : 19802 MB/sec
Sep 13 00:26:38.443400 kernel: 32regs : 19664 MB/sec
Sep 13 00:26:38.444330 kernel: arm64_neon : 12088 MB/sec
Sep 13 00:26:38.444363 kernel: xor: using function: 8regs (19802 MB/sec)
Sep 13 00:26:38.501766 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:26:38.522952 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:26:38.530102 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:26:38.560627 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Sep 13 00:26:38.565344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:26:38.574194 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:26:38.590594 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation
Sep 13 00:26:38.628248 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:26:38.633947 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:26:38.694808 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:26:38.706014 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:26:38.736912 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:26:38.739413 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:26:38.740975 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:26:38.742391 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:26:38.752086 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:26:38.774130 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:26:38.819125 kernel: scsi host0: Virtio SCSI HBA
Sep 13 00:26:38.842244 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:26:38.842398 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Sep 13 00:26:38.858722 kernel: ACPI: bus type USB registered
Sep 13 00:26:38.859350 kernel: usbcore: registered new interface driver usbfs
Sep 13 00:26:38.861772 kernel: usbcore: registered new interface driver hub
Sep 13 00:26:38.861841 kernel: usbcore: registered new device driver usb
Sep 13 00:26:38.872643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:26:38.872900 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:26:38.878551 kernel: sr 0:0:0:0: Power-on or device reset occurred
Sep 13 00:26:38.878952 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Sep 13 00:26:38.876550 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:26:38.880702 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:26:38.877851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:26:38.878044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:26:38.879206 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:26:38.886883 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:26:38.890101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:26:38.903591 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 13 00:26:38.903891 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Sep 13 00:26:38.906719 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 13 00:26:38.909558 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 13 00:26:38.909836 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Sep 13 00:26:38.909926 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Sep 13 00:26:38.910029 kernel: hub 1-0:1.0: USB hub found
Sep 13 00:26:38.910146 kernel: hub 1-0:1.0: 4 ports detected
Sep 13 00:26:38.912307 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 13 00:26:38.912479 kernel: hub 2-0:1.0: USB hub found
Sep 13 00:26:38.912654 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:26:38.915754 kernel: hub 2-0:1.0: 4 ports detected
Sep 13 00:26:38.924458 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:26:38.938711 kernel: sd 0:0:0:1: Power-on or device reset occurred
Sep 13 00:26:38.938971 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Sep 13 00:26:38.939154 kernel: sd 0:0:0:1: [sda] Write Protect is off
Sep 13 00:26:38.939264 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Sep 13 00:26:38.939381 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 13 00:26:38.945112 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:26:38.945167 kernel: GPT:17805311 != 80003071
Sep 13 00:26:38.945178 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:26:38.945188 kernel: GPT:17805311 != 80003071
Sep 13 00:26:38.945966 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:26:38.946012 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:26:38.948711 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Sep 13 00:26:38.978759 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:26:39.009719 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (511)
Sep 13 00:26:39.013543 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Sep 13 00:26:39.021716 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (516)
Sep 13 00:26:39.034305 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Sep 13 00:26:39.041183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 13 00:26:39.049715 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Sep 13 00:26:39.051914 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Sep 13 00:26:39.062928 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:26:39.078481 disk-uuid[573]: Primary Header is updated.
Sep 13 00:26:39.078481 disk-uuid[573]: Secondary Entries is updated.
Sep 13 00:26:39.078481 disk-uuid[573]: Secondary Header is updated.
Sep 13 00:26:39.090764 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:26:39.148725 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 13 00:26:39.286889 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Sep 13 00:26:39.286952 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Sep 13 00:26:39.287171 kernel: usbcore: registered new interface driver usbhid
Sep 13 00:26:39.287717 kernel: usbhid: USB HID core driver
Sep 13 00:26:39.396440 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Sep 13 00:26:39.525787 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Sep 13 00:26:39.579810 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Sep 13 00:26:40.104737 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:26:40.105975 disk-uuid[574]: The operation has completed successfully.
Sep 13 00:26:40.171008 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:26:40.171838 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:26:40.187916 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:26:40.196621 sh[591]: Success
Sep 13 00:26:40.216861 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 00:26:40.283295 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:26:40.297898 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:26:40.300846 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:26:40.325880 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 13 00:26:40.326175 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:26:40.326250 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:26:40.326279 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:26:40.326305 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:26:40.333839 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:26:40.336497 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:26:40.338888 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:26:40.352994 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:26:40.355948 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:26:40.377824 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:26:40.377895 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:26:40.377906 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:26:40.385867 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:26:40.385947 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:26:40.398945 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:26:40.402033 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:26:40.413246 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:26:40.420999 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:26:40.482797 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:26:40.491949 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:26:40.520103 systemd-networkd[774]: lo: Link UP
Sep 13 00:26:40.520113 systemd-networkd[774]: lo: Gained carrier
Sep 13 00:26:40.523346 systemd-networkd[774]: Enumeration completed
Sep 13 00:26:40.523516 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:26:40.524063 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:40.524067 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:26:40.526707 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:40.526710 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:26:40.527340 systemd[1]: Reached target network.target - Network.
Sep 13 00:26:40.528963 systemd-networkd[774]: eth0: Link UP
Sep 13 00:26:40.528967 systemd-networkd[774]: eth0: Gained carrier
Sep 13 00:26:40.528978 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:40.535916 systemd-networkd[774]: eth1: Link UP
Sep 13 00:26:40.535920 systemd-networkd[774]: eth1: Gained carrier
Sep 13 00:26:40.535930 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:40.545209 ignition[703]: Ignition 2.19.0 Sep 13 00:26:40.545886 ignition[703]: Stage: fetch-offline Sep 13 00:26:40.546043 ignition[703]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:26:40.546063 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:26:40.546337 ignition[703]: parsed url from cmdline: "" Sep 13 00:26:40.546344 ignition[703]: no config URL provided Sep 13 00:26:40.546353 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:26:40.546368 ignition[703]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:26:40.549511 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:26:40.546376 ignition[703]: failed to fetch config: resource requires networking Sep 13 00:26:40.546828 ignition[703]: Ignition finished successfully Sep 13 00:26:40.556099 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 13 00:26:40.573767 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:26:40.578414 ignition[782]: Ignition 2.19.0 Sep 13 00:26:40.578427 ignition[782]: Stage: fetch Sep 13 00:26:40.578654 ignition[782]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:26:40.580792 systemd-networkd[774]: eth0: DHCPv4 address 195.201.238.219/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:26:40.578665 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:26:40.578820 ignition[782]: parsed url from cmdline: "" Sep 13 00:26:40.578824 ignition[782]: no config URL provided Sep 13 00:26:40.578830 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:26:40.578839 ignition[782]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:26:40.578862 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 13 00:26:40.579447 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 13 00:26:40.779732 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 13 00:26:40.786019 ignition[782]: GET result: OK Sep 13 00:26:40.786182 ignition[782]: parsing config with SHA512: fcce4ffd465299ecaafe2bb4b7bf16c2b9f7e8f99963a287c845fe8c00485914fb6fecabe3163edb8906b5de87d7c1fc42dc477f9297eb7cf2b338a380c920d8 Sep 13 00:26:40.793957 unknown[782]: fetched base config from "system" Sep 13 00:26:40.793967 unknown[782]: fetched base config from "system" Sep 13 00:26:40.794435 ignition[782]: fetch: fetch complete Sep 13 00:26:40.793972 unknown[782]: fetched user config from "hetzner" Sep 13 00:26:40.794441 ignition[782]: fetch: fetch passed Sep 13 00:26:40.796548 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 00:26:40.794497 ignition[782]: Ignition finished successfully Sep 13 00:26:40.804073 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:26:40.819917 ignition[790]: Ignition 2.19.0 Sep 13 00:26:40.819935 ignition[790]: Stage: kargs Sep 13 00:26:40.820138 ignition[790]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:26:40.820149 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:26:40.821304 ignition[790]: kargs: kargs passed Sep 13 00:26:40.821372 ignition[790]: Ignition finished successfully Sep 13 00:26:40.825098 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:26:40.835524 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
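The fetch stage races DHCP here: attempt #1 against the Hetzner metadata service fails with "network is unreachable" because the eth0/eth1 leases land only a moment later, and attempt #2 succeeds about 200 ms after that. Once a shell is available on the instance, the same link-local endpoints seen in the log can be queried by hand, for example:

    # user data that Ignition fetched (URL exactly as in the log)
    curl -s http://169.254.169.254/hetzner/v1/userdata

    # hostname record that coreos-metadata fetches later in this boot
    curl -s http://169.254.169.254/hetzner/v1/metadata/hostname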
Sep 13 00:26:40.848613 ignition[797]: Ignition 2.19.0 Sep 13 00:26:40.848631 ignition[797]: Stage: disks Sep 13 00:26:40.849393 ignition[797]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:26:40.849408 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:26:40.850590 ignition[797]: disks: disks passed Sep 13 00:26:40.850665 ignition[797]: Ignition finished successfully Sep 13 00:26:40.856322 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:26:40.858411 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:26:40.859320 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:26:40.860428 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:26:40.861607 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:26:40.862612 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:26:40.870982 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:26:40.895476 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 13 00:26:40.901347 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:26:40.909875 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:26:40.981775 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none. Sep 13 00:26:40.982185 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:26:40.983833 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:26:40.992922 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:26:40.998562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:26:41.001144 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 13 00:26:41.005805 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:26:41.005858 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:26:41.013716 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (813) Sep 13 00:26:41.016866 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:26:41.021409 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:26:41.021438 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:26:41.021450 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:26:41.021556 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:26:41.029708 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:26:41.029809 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:26:41.037500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 13 00:26:41.093709 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:26:41.097434 coreos-metadata[815]: Sep 13 00:26:41.097 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 13 00:26:41.101236 coreos-metadata[815]: Sep 13 00:26:41.098 INFO Fetch successful Sep 13 00:26:41.101236 coreos-metadata[815]: Sep 13 00:26:41.098 INFO wrote hostname ci-4081-3-5-n-9bb66b8eb5 to /sysroot/etc/hostname Sep 13 00:26:41.103496 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:26:41.111403 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:26:41.120554 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:26:41.126152 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:26:41.244290 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:26:41.249892 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:26:41.252908 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:26:41.265763 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:26:41.290668 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:26:41.301552 ignition[930]: INFO : Ignition 2.19.0 Sep 13 00:26:41.301552 ignition[930]: INFO : Stage: mount Sep 13 00:26:41.301552 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:26:41.301552 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:26:41.301552 ignition[930]: INFO : mount: mount passed Sep 13 00:26:41.301552 ignition[930]: INFO : Ignition finished successfully Sep 13 00:26:41.305076 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:26:41.311963 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:26:41.324140 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:26:41.331577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:26:41.345761 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941) Sep 13 00:26:41.348028 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:26:41.348097 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:26:41.348110 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:26:41.351818 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:26:41.351928 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:26:41.355222 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 13 00:26:41.382275 ignition[959]: INFO : Ignition 2.19.0 Sep 13 00:26:41.382275 ignition[959]: INFO : Stage: files Sep 13 00:26:41.383809 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:26:41.383809 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:26:41.383809 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:26:41.387692 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:26:41.387692 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:26:41.390988 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:26:41.392742 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:26:41.392742 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:26:41.392561 unknown[959]: wrote ssh authorized keys file for user: core Sep 13 00:26:41.398039 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:26:41.398039 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 13 00:26:41.500111 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:26:41.667859 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:26:41.667859 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:26:41.670912 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 13 00:26:41.770125 systemd-networkd[774]: eth1: Gained IPv6LL Sep 13 00:26:42.000731 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:26:42.273793 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:26:42.273793 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:26:42.278685 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:26:42.279879 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:26:42.279879 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:26:42.279879 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 13 00:26:42.279879 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 13 00:26:42.279879 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 13 00:26:42.279879 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 13 00:26:42.279879 ignition[959]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:26:42.279879 ignition[959]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:26:42.279879 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:26:42.279879 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:26:42.279879 ignition[959]: INFO : files: files passed Sep 13 00:26:42.279879 ignition[959]: INFO : Ignition finished successfully Sep 13 00:26:42.284040 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:26:42.291920 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:26:42.297961 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:26:42.303846 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:26:42.303973 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 13 00:26:42.328022 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:26:42.328022 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:26:42.332013 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:26:42.335006 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:26:42.336148 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:26:42.345884 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:26:42.384788 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:26:42.385834 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:26:42.388055 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:26:42.389318 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:26:42.390665 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:26:42.396058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:26:42.416475 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:26:42.422923 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:26:42.456143 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:26:42.457032 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:26:42.459597 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:26:42.461132 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:26:42.461313 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:26:42.463295 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:26:42.463986 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:26:42.465893 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:26:42.467506 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:26:42.468847 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:26:42.470614 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:26:42.471952 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:26:42.473348 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:26:42.474588 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:26:42.475081 systemd-networkd[774]: eth0: Gained IPv6LL Sep 13 00:26:42.477387 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:26:42.478133 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:26:42.478329 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:26:42.479863 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:26:42.481212 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:26:42.482288 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 13 00:26:42.485819 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:26:42.486793 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:26:42.487133 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:26:42.489578 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:26:42.489889 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:26:42.491635 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:26:42.491922 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:26:42.493993 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:26:42.494180 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:26:42.506821 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:26:42.507806 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:26:42.508109 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:26:42.512758 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:26:42.516007 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:26:42.516370 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:26:42.518830 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:26:42.519034 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:26:42.530481 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:26:42.530599 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:26:42.536704 ignition[1010]: INFO : Ignition 2.19.0 Sep 13 00:26:42.536704 ignition[1010]: INFO : Stage: umount Sep 13 00:26:42.536704 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:26:42.536704 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:26:42.540980 ignition[1010]: INFO : umount: umount passed Sep 13 00:26:42.540980 ignition[1010]: INFO : Ignition finished successfully Sep 13 00:26:42.539643 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:26:42.539843 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:26:42.542877 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:26:42.542940 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:26:42.543592 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:26:42.543637 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:26:42.545699 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:26:42.545756 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:26:42.549898 systemd[1]: Stopped target network.target - Network. Sep 13 00:26:42.550756 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:26:42.550839 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:26:42.552493 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:26:42.553749 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 13 00:26:42.557850 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:26:42.559209 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:26:42.560727 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:26:42.561391 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:26:42.561446 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:26:42.562443 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:26:42.562491 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:26:42.563487 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:26:42.563549 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:26:42.564573 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:26:42.564632 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:26:42.566013 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:26:42.566814 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:26:42.570058 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:26:42.570875 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:26:42.570957 systemd-networkd[774]: eth0: DHCPv6 lease lost Sep 13 00:26:42.571012 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:26:42.572634 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:26:42.573271 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:26:42.573745 systemd-networkd[774]: eth1: DHCPv6 lease lost Sep 13 00:26:42.577909 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:26:42.579788 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:26:42.584106 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:26:42.584437 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:26:42.586602 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:26:42.586810 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:26:42.592939 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:26:42.594046 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:26:42.594151 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:26:42.595927 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:26:42.596094 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:26:42.597580 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:26:42.597640 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:26:42.599552 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:26:42.599613 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:26:42.604914 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:26:42.621518 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:26:42.622491 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:26:42.625818 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 13 00:26:42.625977 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:26:42.627882 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:26:42.627932 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:26:42.629063 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:26:42.629111 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:26:42.630278 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:26:42.630340 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:26:42.632010 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:26:42.632069 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:26:42.633835 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:26:42.633898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:26:42.640941 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:26:42.641658 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:26:42.641756 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:26:42.642643 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:26:42.645756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:26:42.648199 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:26:42.648374 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:26:42.650083 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:26:42.662167 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:26:42.674435 systemd[1]: Switching root. Sep 13 00:26:42.716528 systemd-journald[236]: Journal stopped Sep 13 00:26:43.738774 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Sep 13 00:26:43.738860 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:26:43.738874 kernel: SELinux: policy capability open_perms=1 Sep 13 00:26:43.738888 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:26:43.738897 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:26:43.738911 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:26:43.738922 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:26:43.738931 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:26:43.738941 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:26:43.738950 kernel: audit: type=1403 audit(1757723202.893:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:26:43.738961 systemd[1]: Successfully loaded SELinux policy in 40.781ms. Sep 13 00:26:43.738983 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.776ms. Sep 13 00:26:43.738995 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:26:43.739013 systemd[1]: Detected virtualization kvm. 
Sep 13 00:26:43.739027 systemd[1]: Detected architecture arm64. Sep 13 00:26:43.739038 systemd[1]: Detected first boot. Sep 13 00:26:43.739048 systemd[1]: Hostname set to <ci-4081-3-5-n-9bb66b8eb5>. Sep 13 00:26:43.739059 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:26:43.739069 zram_generator::config[1052]: No configuration found. Sep 13 00:26:43.739080 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:26:43.739090 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:26:43.739103 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:26:43.739113 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:26:43.739124 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:26:43.739136 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:26:43.739146 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:26:43.739157 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:26:43.739167 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:26:43.739177 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:26:43.739188 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:26:43.739201 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:26:43.739249 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:26:43.739263 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:26:43.739274 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:26:43.739285 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:26:43.739295 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:26:43.739306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:26:43.739317 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 13 00:26:43.739332 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:26:43.739346 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:26:43.739360 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:26:43.739371 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:26:43.739381 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:26:43.739391 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:26:43.739402 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:26:43.739414 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:26:43.739425 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:26:43.739436 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:26:43.739446 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:26:43.739456 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:26:43.739467 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:26:43.739477 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:26:43.739487 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:26:43.739498 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:26:43.739516 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:26:43.739619 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:26:43.739634 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:26:43.739644 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:26:43.739655 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:26:43.741867 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:26:43.741907 systemd[1]: Reached target machines.target - Containers. Sep 13 00:26:43.741918 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:26:43.741930 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:26:43.741950 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:26:43.741962 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:26:43.741973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:26:43.741983 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:26:43.741997 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:26:43.742011 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:26:43.742021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:26:43.742032 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:26:43.742043 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:26:43.742053 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:26:43.742069 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:26:43.742081 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:26:43.742091 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:26:43.742102 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:26:43.742114 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:26:43.742125 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:26:43.742135 kernel: loop: module loaded Sep 13 00:26:43.742147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:26:43.742157 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:26:43.742167 systemd[1]: Stopped verity-setup.service. Sep 13 00:26:43.742179 kernel: ACPI: bus type drm_connector registered Sep 13 00:26:43.742189 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 13 00:26:43.742201 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:26:43.742254 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:26:43.742269 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:26:43.742280 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:26:43.742291 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:26:43.742305 kernel: fuse: init (API version 7.39) Sep 13 00:26:43.742315 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:26:43.742326 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:26:43.742337 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:26:43.742348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:26:43.742359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:26:43.742369 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:26:43.742379 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:26:43.742390 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:26:43.742445 systemd-journald[1122]: Collecting audit messages is disabled. Sep 13 00:26:43.742472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:26:43.742483 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:26:43.742494 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:26:43.742506 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:26:43.742517 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:26:43.742529 systemd-journald[1122]: Journal started Sep 13 00:26:43.742553 systemd-journald[1122]: Runtime Journal (/run/log/journal/efdbb3bf743449d5977c3c25d192833d) is 8.0M, max 76.6M, 68.6M free. Sep 13 00:26:43.428066 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:26:43.449196 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 13 00:26:43.449654 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:26:43.745603 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:26:43.745749 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:26:43.769736 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:26:43.779380 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:26:43.789864 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:26:43.801161 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:26:43.804854 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:26:43.804902 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:26:43.808275 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:26:43.813411 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:26:43.816917 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 13 00:26:43.818951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:26:43.831289 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:26:43.834015 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:26:43.835231 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:26:43.837995 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:26:43.840033 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:26:43.844957 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:26:43.849735 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:26:43.852276 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:26:43.853424 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:26:43.854545 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:26:43.856035 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:26:43.869245 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:26:43.877050 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:26:43.882601 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:26:43.890942 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:26:43.912175 systemd-journald[1122]: Time spent on flushing to /var/log/journal/efdbb3bf743449d5977c3c25d192833d is 32.367ms for 1124 entries. Sep 13 00:26:43.912175 systemd-journald[1122]: System Journal (/var/log/journal/efdbb3bf743449d5977c3c25d192833d) is 8.0M, max 584.8M, 576.8M free. Sep 13 00:26:43.975817 systemd-journald[1122]: Received client request to flush runtime journal. Sep 13 00:26:43.975905 kernel: loop0: detected capacity change from 0 to 114328 Sep 13 00:26:43.914769 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:26:43.916470 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:26:43.921130 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:26:43.929427 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:26:43.961356 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:26:43.980187 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:26:43.987932 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:26:43.992701 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:26:43.996834 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:26:44.016877 kernel: loop1: detected capacity change from 0 to 203944 Sep 13 00:26:44.055756 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
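By this point journald has flushed the 8.0M runtime journal from /run into the persistent store under /var/log/journal (32.367ms for 1124 entries, per the lines above), so the initrd messages survive into the running system. Typical follow-up queries against that store, once the system is up, look like:

    # size of the persistent journal on disk
    journalctl --disk-usage

    # messages from the current boot for a single unit, initrd included
    journalctl -b -u ignition-files.service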
Sep 13 00:26:44.064072 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:26:44.083723 kernel: loop2: detected capacity change from 0 to 8 Sep 13 00:26:44.119718 kernel: loop3: detected capacity change from 0 to 114432 Sep 13 00:26:44.154288 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Sep 13 00:26:44.154308 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Sep 13 00:26:44.166305 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:26:44.176784 kernel: loop4: detected capacity change from 0 to 114328 Sep 13 00:26:44.207718 kernel: loop5: detected capacity change from 0 to 203944 Sep 13 00:26:44.240739 kernel: loop6: detected capacity change from 0 to 8 Sep 13 00:26:44.244706 kernel: loop7: detected capacity change from 0 to 114432 Sep 13 00:26:44.265839 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 13 00:26:44.266922 (sd-merge)[1192]: Merged extensions into '/usr'. Sep 13 00:26:44.274416 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:26:44.274437 systemd[1]: Reloading... Sep 13 00:26:44.405751 zram_generator::config[1217]: No configuration found. Sep 13 00:26:44.614756 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:26:44.629814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:26:44.683498 systemd[1]: Reloading finished in 408 ms. Sep 13 00:26:44.709537 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:26:44.711172 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:26:44.712576 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:26:44.722908 systemd[1]: Starting ensure-sysext.service... Sep 13 00:26:44.725959 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:26:44.736929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:26:44.744760 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:26:44.744778 systemd[1]: Reloading... Sep 13 00:26:44.768587 systemd-udevd[1258]: Using default interface naming scheme 'v255'. Sep 13 00:26:44.778410 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:26:44.779051 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:26:44.784501 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:26:44.784805 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Sep 13 00:26:44.784850 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Sep 13 00:26:44.789086 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:26:44.789099 systemd-tmpfiles[1257]: Skipping /boot Sep 13 00:26:44.805415 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 13 00:26:44.805429 systemd-tmpfiles[1257]: Skipping /boot Sep 13 00:26:44.870021 zram_generator::config[1295]: No configuration found. Sep 13 00:26:45.051490 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:26:45.125706 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1322) Sep 13 00:26:45.131215 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 13 00:26:45.131540 systemd[1]: Reloading finished in 386 ms. Sep 13 00:26:45.148089 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:26:45.150815 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:26:45.160700 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:26:45.194212 systemd[1]: Finished ensure-sysext.service. Sep 13 00:26:45.224014 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:26:45.229423 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:26:45.231130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:26:45.255033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:26:45.259927 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:26:45.264456 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:26:45.269946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:26:45.271418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:26:45.275971 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:26:45.281959 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:26:45.291973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:26:45.298979 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:26:45.307062 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:26:45.309165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:26:45.310620 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:26:45.313538 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 13 00:26:45.316789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:26:45.319839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:26:45.322933 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:26:45.329405 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 13 00:26:45.329519 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 13 00:26:45.329535 kernel: [drm] features: -context_init Sep 13 00:26:45.331033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
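The (sd-merge) lines a little earlier record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images onto /usr, followed by the daemon reload that picks up the unit files they ship. The merge state can be inspected or redone at runtime with the tool's own verbs:

    # show which extension images are currently merged and their hierarchy
    systemd-sysext status

    # re-scan the extension image directories and remount the /usr overlay
    systemd-sysext refresh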
Sep 13 00:26:45.333053 kernel: [drm] number of scanouts: 1 Sep 13 00:26:45.333149 kernel: [drm] number of cap sets: 0 Sep 13 00:26:45.332547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:26:45.332661 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:26:45.338700 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 13 00:26:45.350319 kernel: Console: switching to colour frame buffer device 160x50 Sep 13 00:26:45.372030 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:26:45.376607 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 13 00:26:45.405619 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:26:45.417805 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:26:45.421591 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:26:45.422763 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:26:45.426748 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:26:45.448969 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:26:45.450693 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:26:45.450930 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:26:45.465930 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 13 00:26:45.503331 augenrules[1404]: No rules Sep 13 00:26:45.504401 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:26:45.506873 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:26:45.507431 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:26:45.508633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:26:45.516489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:26:45.518994 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:26:45.545045 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:26:45.549389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:26:45.559237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:26:45.561253 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:26:45.632283 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:26:45.649004 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:26:45.656986 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:26:45.658054 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:26:45.673879 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
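
The modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore and modprobe@loop units above are instances of systemd's modprobe@.service template, which exists so module loading can be ordered against other units. Each instance is equivalent to a plain modprobe call; a sketch:

    # What modprobe@dm_mod.service does under the hood:
    modprobe dm_mod

    # The same load, routed through systemd so unit ordering applies:
    systemctl start modprobe@dm_mod.service
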
Sep 13 00:26:45.685040 systemd-resolved[1373]: Positive Trust Anchors: Sep 13 00:26:45.685057 systemd-resolved[1373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:26:45.685090 systemd-resolved[1373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:26:45.685591 systemd-networkd[1372]: lo: Link UP Sep 13 00:26:45.685594 systemd-networkd[1372]: lo: Gained carrier Sep 13 00:26:45.689510 systemd-networkd[1372]: Enumeration completed Sep 13 00:26:45.689852 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:26:45.692354 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:26:45.692358 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:26:45.697769 systemd-resolved[1373]: Using system hostname 'ci-4081-3-5-n-9bb66b8eb5'. Sep 13 00:26:45.699167 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:26:45.700325 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:26:45.701348 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:26:45.702300 systemd-networkd[1372]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:26:45.702308 systemd-networkd[1372]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:26:45.703602 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:26:45.704992 systemd[1]: Reached target network.target - Network. Sep 13 00:26:45.705537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:26:45.706759 systemd-networkd[1372]: eth0: Link UP Sep 13 00:26:45.706769 systemd-networkd[1372]: eth0: Gained carrier Sep 13 00:26:45.706796 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:26:45.711485 systemd-networkd[1372]: eth1: Link UP Sep 13 00:26:45.713739 systemd-networkd[1372]: eth1: Gained carrier Sep 13 00:26:45.713778 systemd-networkd[1372]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:26:45.715090 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:26:45.723638 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:26:45.726514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:26:45.728111 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:26:45.729445 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
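
systemd-networkd matched eth0 and eth1 against the catch-all zz-default.network and warns that the match is "based on potentially unpredictable interface name". A sketch of a local .network file that pins the match to a stable attribute instead (the MAC address below is a placeholder, not taken from this host):

    mkdir -p /etc/systemd/network
    cat >/etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    # Placeholder MAC -- substitute the real one from `networkctl status eth0`.
    MACAddress=96:00:00:00:00:01

    [Network]
    DHCP=yes
    EOF

    # Ask networkd to re-read its configuration without dropping links.
    networkctl reload
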
Sep 13 00:26:45.730373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:26:45.731382 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:26:45.732152 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:26:45.732906 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:26:45.733622 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:26:45.733665 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:26:45.734270 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:26:45.736466 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:26:45.739138 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:26:45.747453 systemd-networkd[1372]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:26:45.749614 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Sep 13 00:26:45.749958 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:26:45.753300 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:26:45.755804 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:26:45.757710 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:26:45.758561 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:26:45.759664 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:26:45.759709 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:26:45.763845 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:26:45.765830 systemd-networkd[1372]: eth0: DHCPv4 address 195.201.238.219/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:26:45.769319 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Sep 13 00:26:45.772068 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:26:45.778247 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:26:45.788872 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:26:45.794968 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:26:45.795695 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:26:45.800985 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:26:45.803405 jq[1442]: false Sep 13 00:26:45.808883 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:26:45.814969 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 13 00:26:45.819352 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:26:45.827042 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
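
dbus.socket, sshd.socket and docker.socket above are socket-activated: systemd holds the listening socket and starts the backing service on the first connection (the earlier docker.socket warning also showed the listener being rewritten from /var/run/docker.sock to /run/docker.sock). The active listeners and what they spawn can be listed directly:

    # LISTEN column shows the socket path, ACTIVATES the unit it starts.
    systemctl list-sockets
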
Sep 13 00:26:45.830892 coreos-metadata[1438]: Sep 13 00:26:45.830 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 13 00:26:45.832476 coreos-metadata[1438]: Sep 13 00:26:45.831 INFO Fetch successful Sep 13 00:26:45.832476 coreos-metadata[1438]: Sep 13 00:26:45.831 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 13 00:26:45.834379 coreos-metadata[1438]: Sep 13 00:26:45.832 INFO Fetch successful Sep 13 00:26:45.834965 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:26:45.839866 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:26:45.840536 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:26:45.843962 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:26:45.844638 dbus-daemon[1439]: [system] SELinux support is enabled Sep 13 00:26:45.855990 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:26:45.857477 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:26:45.871336 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:26:45.872777 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:26:45.881626 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:26:45.881703 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:26:45.885896 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:26:45.885934 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:26:45.900946 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:26:45.901228 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:26:45.903282 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:26:45.903706 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
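
coreos-metadata fetched the Hetzner instance metadata from the link-local endpoint logged above; the same documents can be retrieved by hand for debugging, assuming curl is available on the host:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks
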
Sep 13 00:26:45.928896 jq[1453]: true Sep 13 00:26:45.948500 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:26:45.954766 tar[1460]: linux-arm64/helm Sep 13 00:26:45.961596 extend-filesystems[1443]: Found loop4 Sep 13 00:26:45.961596 extend-filesystems[1443]: Found loop5 Sep 13 00:26:45.961596 extend-filesystems[1443]: Found loop6 Sep 13 00:26:45.976771 extend-filesystems[1443]: Found loop7 Sep 13 00:26:45.976771 extend-filesystems[1443]: Found sda Sep 13 00:26:45.976771 extend-filesystems[1443]: Found sda1 Sep 13 00:26:45.976771 extend-filesystems[1443]: Found sda2 Sep 13 00:26:45.976771 extend-filesystems[1443]: Found sda3 Sep 13 00:26:45.976771 extend-filesystems[1443]: Found usr Sep 13 00:26:45.976771 extend-filesystems[1443]: Found sda4 Sep 13 00:26:45.976771 extend-filesystems[1443]: Found sda6 Sep 13 00:26:45.976771 extend-filesystems[1443]: Found sda7 Sep 13 00:26:45.976771 extend-filesystems[1443]: Found sda9 Sep 13 00:26:45.976771 extend-filesystems[1443]: Checking size of /dev/sda9 Sep 13 00:26:45.981187 jq[1475]: true Sep 13 00:26:45.985929 update_engine[1451]: I20250913 00:26:45.980650 1451 main.cc:92] Flatcar Update Engine starting Sep 13 00:26:46.009103 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:26:46.010038 update_engine[1451]: I20250913 00:26:46.009939 1451 update_check_scheduler.cc:74] Next update check in 4m12s Sep 13 00:26:46.019978 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:26:46.031807 extend-filesystems[1443]: Resized partition /dev/sda9 Sep 13 00:26:46.045366 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:26:46.061448 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 13 00:26:46.109808 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:26:46.112048 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:26:46.160454 systemd-logind[1449]: New seat seat0. Sep 13 00:26:46.170946 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (Power Button) Sep 13 00:26:46.170975 systemd-logind[1449]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 13 00:26:46.171496 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:26:46.197703 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1322) Sep 13 00:26:46.262906 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:26:46.265846 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:26:46.278079 systemd[1]: Starting sshkeys.service... Sep 13 00:26:46.301951 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:26:46.318088 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:26:46.328578 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:26:46.332337 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
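
extend-filesystems.service walks the block devices enumerated above and grows the root filesystem: the kernel line shows sda9 being resized online from 1617920 to 9393147 4k blocks, with completion logged just below. The manual equivalent, assuming the partition itself has already been enlarged, is a single resize2fs call:

    # ext4 supports growing while mounted, so this is safe on a live /.
    resize2fs /dev/sda9

    # Confirm the new size.
    df -h /
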
Sep 13 00:26:46.350193 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 13 00:26:46.368800 extend-filesystems[1490]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 13 00:26:46.368800 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 13 00:26:46.368800 extend-filesystems[1490]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 13 00:26:46.382927 extend-filesystems[1443]: Resized filesystem in /dev/sda9 Sep 13 00:26:46.382927 extend-filesystems[1443]: Found sr0 Sep 13 00:26:46.373728 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:26:46.374862 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:26:46.389568 coreos-metadata[1521]: Sep 13 00:26:46.388 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 13 00:26:46.391486 coreos-metadata[1521]: Sep 13 00:26:46.390 INFO Fetch successful Sep 13 00:26:46.400458 unknown[1521]: wrote ssh authorized keys file for user: core Sep 13 00:26:46.411387 containerd[1470]: time="2025-09-13T00:26:46.408826560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:26:46.444867 update-ssh-keys[1527]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:26:46.444157 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:26:46.450744 systemd[1]: Finished sshkeys.service. Sep 13 00:26:46.493034 containerd[1470]: time="2025-09-13T00:26:46.492601880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:46.496821 containerd[1470]: time="2025-09-13T00:26:46.496763080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:26:46.497785 containerd[1470]: time="2025-09-13T00:26:46.497229080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:26:46.497785 containerd[1470]: time="2025-09-13T00:26:46.497262640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:26:46.497785 containerd[1470]: time="2025-09-13T00:26:46.497489000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:26:46.497785 containerd[1470]: time="2025-09-13T00:26:46.497509320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:46.497785 containerd[1470]: time="2025-09-13T00:26:46.497576760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:26:46.497785 containerd[1470]: time="2025-09-13T00:26:46.497590080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:46.499290 containerd[1470]: time="2025-09-13T00:26:46.498335440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:26:46.499290 containerd[1470]: time="2025-09-13T00:26:46.498361640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:46.499290 containerd[1470]: time="2025-09-13T00:26:46.498376960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:26:46.499290 containerd[1470]: time="2025-09-13T00:26:46.498386520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:46.499290 containerd[1470]: time="2025-09-13T00:26:46.498492040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:46.499290 containerd[1470]: time="2025-09-13T00:26:46.498736400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:46.499900 containerd[1470]: time="2025-09-13T00:26:46.499872600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:26:46.501122 containerd[1470]: time="2025-09-13T00:26:46.500698600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:26:46.501122 containerd[1470]: time="2025-09-13T00:26:46.500836080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:26:46.501122 containerd[1470]: time="2025-09-13T00:26:46.500901080Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:26:46.508863 containerd[1470]: time="2025-09-13T00:26:46.508814520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:26:46.509075 containerd[1470]: time="2025-09-13T00:26:46.509060440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:26:46.510072 containerd[1470]: time="2025-09-13T00:26:46.509734400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:26:46.510072 containerd[1470]: time="2025-09-13T00:26:46.509775200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:26:46.510072 containerd[1470]: time="2025-09-13T00:26:46.509793200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:26:46.510072 containerd[1470]: time="2025-09-13T00:26:46.510001280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511071240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511313760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511335760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511363720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511401920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511419720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511434840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511452240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511468960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511484680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511498440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511512840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511595640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.512027 containerd[1470]: time="2025-09-13T00:26:46.511617400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.512470 containerd[1470]: time="2025-09-13T00:26:46.511630800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.512470 containerd[1470]: time="2025-09-13T00:26:46.511646000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.512827080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.512874080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.512889760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.512905120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.512934080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.512957720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.512975480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.512993000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.513007240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.513026200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.513056080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.513069160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.513695 containerd[1470]: time="2025-09-13T00:26:46.513081320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:26:46.516567 containerd[1470]: time="2025-09-13T00:26:46.514067520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:26:46.516567 containerd[1470]: time="2025-09-13T00:26:46.515176960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:26:46.516567 containerd[1470]: time="2025-09-13T00:26:46.515202160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:26:46.516567 containerd[1470]: time="2025-09-13T00:26:46.515215320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:26:46.516567 containerd[1470]: time="2025-09-13T00:26:46.515225480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:26:46.516567 containerd[1470]: time="2025-09-13T00:26:46.515241640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:26:46.516567 containerd[1470]: time="2025-09-13T00:26:46.515253800Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:26:46.516567 containerd[1470]: time="2025-09-13T00:26:46.515269800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:26:46.516853 containerd[1470]: time="2025-09-13T00:26:46.515650600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:26:46.516853 containerd[1470]: time="2025-09-13T00:26:46.515746280Z" level=info msg="Connect containerd service" Sep 13 00:26:46.516853 containerd[1470]: time="2025-09-13T00:26:46.515790520Z" level=info msg="using legacy CRI server" Sep 13 00:26:46.516853 containerd[1470]: time="2025-09-13T00:26:46.515797160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:26:46.516853 containerd[1470]: time="2025-09-13T00:26:46.515922240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:26:46.518293 containerd[1470]: time="2025-09-13T00:26:46.518141880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:26:46.520083 
containerd[1470]: time="2025-09-13T00:26:46.520019800Z" level=info msg="Start subscribing containerd event" Sep 13 00:26:46.521231 containerd[1470]: time="2025-09-13T00:26:46.520587640Z" level=info msg="Start recovering state" Sep 13 00:26:46.521231 containerd[1470]: time="2025-09-13T00:26:46.520796040Z" level=info msg="Start event monitor" Sep 13 00:26:46.521231 containerd[1470]: time="2025-09-13T00:26:46.520815240Z" level=info msg="Start snapshots syncer" Sep 13 00:26:46.521231 containerd[1470]: time="2025-09-13T00:26:46.520825960Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:26:46.521231 containerd[1470]: time="2025-09-13T00:26:46.520836440Z" level=info msg="Start streaming server" Sep 13 00:26:46.523624 containerd[1470]: time="2025-09-13T00:26:46.522326440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:26:46.523624 containerd[1470]: time="2025-09-13T00:26:46.522427240Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:26:46.522623 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:26:46.525853 containerd[1470]: time="2025-09-13T00:26:46.524033800Z" level=info msg="containerd successfully booted in 0.120225s" Sep 13 00:26:46.814131 tar[1460]: linux-arm64/LICENSE Sep 13 00:26:46.814336 tar[1460]: linux-arm64/README.md Sep 13 00:26:46.829604 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:26:46.889874 systemd-networkd[1372]: eth0: Gained IPv6LL Sep 13 00:26:46.890578 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Sep 13 00:26:46.895493 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:26:46.897599 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:26:46.908893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:26:46.922887 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:26:46.953062 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:26:46.965253 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:26:46.989534 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:26:46.998342 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:26:47.009469 systemd[1]: Started sshd@0-195.201.238.219:22-107.175.39.180:50952.service - OpenSSH per-connection server daemon (107.175.39.180:50952). Sep 13 00:26:47.017667 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:26:47.018135 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:26:47.028922 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:26:47.054782 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:26:47.063085 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:26:47.072623 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 13 00:26:47.074137 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:26:47.594440 systemd-networkd[1372]: eth1: Gained IPv6LL Sep 13 00:26:47.595005 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Sep 13 00:26:47.888998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:26:47.891757 systemd[1]: Reached target multi-user.target - Multi-User System. 
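
The long "Start cri plugin with config {...}" dump above includes Runtimes:map[runc:{Type:io.containerd.runc.v2 ... Options:map[SystemdCgroup:true]}], i.e. the CRI plugin drives runc through the v2 shim with the systemd cgroup driver. A sketch of the config.toml fragment that yields those options (containerd 1.7, config version 2 syntax; illustrative, not the file shipped on this host):

    # Illustrative fragment only; the real file would live at
    # /etc/containerd/config.toml.
    cat >/tmp/containerd-runc-fragment.toml <<'EOF'
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    EOF

    # containerd reported serving on its sockets above; a quick liveness check:
    ctr --address /run/containerd/containerd.sock version
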
Sep 13 00:26:47.893010 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:26:47.897539 systemd[1]: Startup finished in 847ms (kernel) + 5.196s (initrd) + 5.044s (userspace) = 11.087s. Sep 13 00:26:48.565870 kubelet[1571]: E0913 00:26:48.565764 1571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:26:48.569201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:26:48.569406 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:26:50.679213 sshd[1556]: Connection closed by authenticating user root 107.175.39.180 port 50952 [preauth] Sep 13 00:26:50.680300 systemd[1]: sshd@0-195.201.238.219:22-107.175.39.180:50952.service: Deactivated successfully. Sep 13 00:26:50.868128 systemd[1]: Started sshd@1-195.201.238.219:22-107.175.39.180:37096.service - OpenSSH per-connection server daemon (107.175.39.180:37096). Sep 13 00:26:54.488731 sshd[1586]: Connection closed by authenticating user root 107.175.39.180 port 37096 [preauth] Sep 13 00:26:54.491450 systemd[1]: sshd@1-195.201.238.219:22-107.175.39.180:37096.service: Deactivated successfully. Sep 13 00:26:54.637446 systemd[1]: Started sshd@2-195.201.238.219:22-107.175.39.180:37104.service - OpenSSH per-connection server daemon (107.175.39.180:37104). Sep 13 00:26:58.570498 sshd[1591]: Connection closed by authenticating user root 107.175.39.180 port 37104 [preauth] Sep 13 00:26:58.572263 systemd[1]: sshd@2-195.201.238.219:22-107.175.39.180:37104.service: Deactivated successfully. Sep 13 00:26:58.576530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:26:58.588529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:26:58.753171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:26:58.753428 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:26:58.807588 kubelet[1603]: E0913 00:26:58.807447 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:26:58.816103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:26:58.816271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:26:58.825882 systemd[1]: Started sshd@3-195.201.238.219:22-107.175.39.180:50238.service - OpenSSH per-connection server daemon (107.175.39.180:50238). Sep 13 00:27:03.381127 sshd[1612]: Connection closed by authenticating user root 107.175.39.180 port 50238 [preauth] Sep 13 00:27:03.384012 systemd[1]: sshd@3-195.201.238.219:22-107.175.39.180:50238.service: Deactivated successfully. Sep 13 00:27:03.591086 systemd[1]: Started sshd@4-195.201.238.219:22-107.175.39.180:50254.service - OpenSSH per-connection server daemon (107.175.39.180:50254). 
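
The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a node like this the file is normally written by kubeadm during init/join, at which point the restart loop resolves itself. A hand-written sketch of the kind of KubeletConfiguration that lands there (field values are illustrative, not recovered from this host):

    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Matches the SystemdCgroup=true runc option containerd was started with.
    cgroupDriver: systemd
    EOF
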
Sep 13 00:27:08.403939 sshd[1617]: Connection closed by authenticating user root 107.175.39.180 port 50254 [preauth] Sep 13 00:27:08.404972 systemd[1]: sshd@4-195.201.238.219:22-107.175.39.180:50254.service: Deactivated successfully. Sep 13 00:27:08.583190 systemd[1]: Started sshd@5-195.201.238.219:22-107.175.39.180:60434.service - OpenSSH per-connection server daemon (107.175.39.180:60434). Sep 13 00:27:08.888616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:27:08.896133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:27:09.063027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:27:09.063168 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:27:09.113008 kubelet[1631]: E0913 00:27:09.112959 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:27:09.115368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:27:09.115498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:27:11.971590 sshd[1622]: Connection closed by authenticating user root 107.175.39.180 port 60434 [preauth] Sep 13 00:27:11.973645 systemd[1]: sshd@5-195.201.238.219:22-107.175.39.180:60434.service: Deactivated successfully. Sep 13 00:27:12.240284 systemd[1]: Started sshd@6-195.201.238.219:22-107.175.39.180:60464.service - OpenSSH per-connection server daemon (107.175.39.180:60464). Sep 13 00:27:15.782201 systemd[1]: Started sshd@7-195.201.238.219:22-147.75.109.163:46906.service - OpenSSH per-connection server daemon (147.75.109.163:46906). Sep 13 00:27:16.293169 sshd[1642]: Connection closed by authenticating user root 107.175.39.180 port 60464 [preauth] Sep 13 00:27:16.292166 systemd[1]: sshd@6-195.201.238.219:22-107.175.39.180:60464.service: Deactivated successfully. Sep 13 00:27:16.446844 systemd[1]: Started sshd@8-195.201.238.219:22-107.175.39.180:55272.service - OpenSSH per-connection server daemon (107.175.39.180:55272). Sep 13 00:27:16.780772 sshd[1645]: Accepted publickey for core from 147.75.109.163 port 46906 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:27:16.782859 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:16.797861 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:27:16.804227 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:27:16.807781 systemd-logind[1449]: New session 1 of user core. Sep 13 00:27:16.818647 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:27:16.828141 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:27:16.833403 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:27:16.944480 systemd[1653]: Queued start job for default target default.target. Sep 13 00:27:16.957585 systemd[1653]: Created slice app.slice - User Application Slice. Sep 13 00:27:16.957645 systemd[1653]: Reached target paths.target - Paths. 
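
The steady stream of "Connection closed by authenticating user root 107.175.39.180 ... [preauth]" entries is an external password-guessing bot failing before authentication completes. A sketch for refusing such probes outright, assuming this sshd honours an sshd_config.d include as recent OpenSSH packages do; sshd here is socket-activated ("per-connection server daemon"), so new connections pick the change up without a daemon restart:

    # Assumes the stock sshd_config contains "Include /etc/ssh/sshd_config.d/*.conf".
    mkdir -p /etc/ssh/sshd_config.d
    cat >/etc/ssh/sshd_config.d/10-no-root.conf <<'EOF'
    PermitRootLogin no
    PasswordAuthentication no
    EOF

    sshd -t                             # validate the configuration
    sshd -T | grep -i permitrootlogin   # print the effective value
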
Sep 13 00:27:16.957697 systemd[1653]: Reached target timers.target - Timers. Sep 13 00:27:16.960088 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:27:16.977056 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:27:16.977209 systemd[1653]: Reached target sockets.target - Sockets. Sep 13 00:27:16.977225 systemd[1653]: Reached target basic.target - Basic System. Sep 13 00:27:16.977290 systemd[1653]: Reached target default.target - Main User Target. Sep 13 00:27:16.977324 systemd[1653]: Startup finished in 137ms. Sep 13 00:27:16.977747 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:27:16.985121 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:27:17.685034 systemd-timesyncd[1374]: Contacted time server 85.215.189.120:123 (2.flatcar.pool.ntp.org). Sep 13 00:27:17.685127 systemd-timesyncd[1374]: Initial clock synchronization to Sat 2025-09-13 00:27:18.078986 UTC. Sep 13 00:27:17.687202 systemd[1]: Started sshd@9-195.201.238.219:22-147.75.109.163:46918.service - OpenSSH per-connection server daemon (147.75.109.163:46918). Sep 13 00:27:18.696902 sshd[1665]: Accepted publickey for core from 147.75.109.163 port 46918 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:27:18.700027 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:18.706511 systemd-logind[1449]: New session 2 of user core. Sep 13 00:27:18.714076 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:27:19.138461 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:27:19.144217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:27:19.306067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:27:19.322942 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:27:19.380030 kubelet[1677]: E0913 00:27:19.379882 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:27:19.382696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:27:19.382947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:27:19.420226 sshd[1665]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:19.430250 systemd[1]: sshd@9-195.201.238.219:22-147.75.109.163:46918.service: Deactivated successfully. Sep 13 00:27:19.432335 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:27:19.435913 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:27:19.438865 systemd-logind[1449]: Removed session 2. Sep 13 00:27:19.603102 systemd[1]: Started sshd@10-195.201.238.219:22-147.75.109.163:46930.service - OpenSSH per-connection server daemon (147.75.109.163:46930). Sep 13 00:27:19.918598 sshd[1650]: Connection closed by authenticating user root 107.175.39.180 port 55272 [preauth] Sep 13 00:27:19.921603 systemd[1]: sshd@8-195.201.238.219:22-107.175.39.180:55272.service: Deactivated successfully. 
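
systemd-timesyncd finally reached 85.215.189.120 (2.flatcar.pool.ntp.org) above and stepped the clock forward by about 0.39 s, so all earlier timestamps predate synchronization. The current sync state is queryable:

    timedatectl timesync-status   # server, stratum, offset, poll interval
    timedatectl show-timesync     # the same data in key=value form
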
Sep 13 00:27:20.125252 systemd[1]: Started sshd@11-195.201.238.219:22-107.175.39.180:55280.service - OpenSSH per-connection server daemon (107.175.39.180:55280). Sep 13 00:27:20.637276 sshd[1688]: Accepted publickey for core from 147.75.109.163 port 46930 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:27:20.639918 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:20.646858 systemd-logind[1449]: New session 3 of user core. Sep 13 00:27:20.655102 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:27:21.345338 sshd[1688]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:21.350132 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:27:21.351405 systemd[1]: sshd@10-195.201.238.219:22-147.75.109.163:46930.service: Deactivated successfully. Sep 13 00:27:21.354559 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:27:21.358526 systemd-logind[1449]: Removed session 3. Sep 13 00:27:21.529899 systemd[1]: Started sshd@12-195.201.238.219:22-147.75.109.163:42392.service - OpenSSH per-connection server daemon (147.75.109.163:42392). Sep 13 00:27:22.528360 sshd[1700]: Accepted publickey for core from 147.75.109.163 port 42392 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:27:22.531270 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:22.538045 systemd-logind[1449]: New session 4 of user core. Sep 13 00:27:22.545113 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:27:23.226233 sshd[1700]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:23.232632 systemd[1]: sshd@12-195.201.238.219:22-147.75.109.163:42392.service: Deactivated successfully. Sep 13 00:27:23.235069 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:27:23.237657 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:27:23.239276 systemd-logind[1449]: Removed session 4. Sep 13 00:27:23.401422 systemd[1]: Started sshd@13-195.201.238.219:22-147.75.109.163:42406.service - OpenSSH per-connection server daemon (147.75.109.163:42406). Sep 13 00:27:23.672799 sshd[1693]: Connection closed by authenticating user root 107.175.39.180 port 55280 [preauth] Sep 13 00:27:23.675381 systemd[1]: sshd@11-195.201.238.219:22-107.175.39.180:55280.service: Deactivated successfully. Sep 13 00:27:23.924172 systemd[1]: Started sshd@14-195.201.238.219:22-107.175.39.180:55296.service - OpenSSH per-connection server daemon (107.175.39.180:55296). Sep 13 00:27:24.421753 sshd[1707]: Accepted publickey for core from 147.75.109.163 port 42406 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:27:24.424067 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:24.430847 systemd-logind[1449]: New session 5 of user core. Sep 13 00:27:24.439115 systemd[1]: Started session-5.scope - Session 5 of User core. 
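
Each accepted login above records the key's SHA256 fingerprint (SHA256:NhQZ2u4p...). That fingerprint can be matched against the entries update-ssh-keys wrote to /home/core/.ssh/authorized_keys earlier in the boot:

    # Prints one "size SHA256:... comment (type)" line per authorized key.
    ssh-keygen -lf /home/core/.ssh/authorized_keys
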
Sep 13 00:27:24.977255 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:27:24.978208 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:27:24.998070 sudo[1714]: pam_unix(sudo:session): session closed for user root Sep 13 00:27:25.163356 sshd[1707]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:25.170035 systemd[1]: sshd@13-195.201.238.219:22-147.75.109.163:42406.service: Deactivated successfully. Sep 13 00:27:25.172677 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:27:25.175024 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:27:25.176994 systemd-logind[1449]: Removed session 5. Sep 13 00:27:25.349049 systemd[1]: Started sshd@15-195.201.238.219:22-147.75.109.163:42410.service - OpenSSH per-connection server daemon (147.75.109.163:42410). Sep 13 00:27:26.349220 sshd[1720]: Accepted publickey for core from 147.75.109.163 port 42410 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:27:26.351200 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:26.359268 systemd-logind[1449]: New session 6 of user core. Sep 13 00:27:26.366621 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:27:26.878287 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:27:26.878834 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:27:26.892046 sudo[1724]: pam_unix(sudo:session): session closed for user root Sep 13 00:27:26.900933 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:27:26.901385 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:27:26.932895 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:27:26.938986 auditctl[1727]: No rules Sep 13 00:27:26.940485 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:27:26.942156 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:27:26.951702 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:27:27.006734 augenrules[1745]: No rules Sep 13 00:27:27.008261 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:27:27.010246 sudo[1723]: pam_unix(sudo:session): session closed for user root Sep 13 00:27:27.171639 sshd[1720]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:27.176812 systemd[1]: sshd@15-195.201.238.219:22-147.75.109.163:42410.service: Deactivated successfully. Sep 13 00:27:27.179269 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:27:27.181955 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:27:27.183247 systemd-logind[1449]: Removed session 6. Sep 13 00:27:27.348110 systemd[1]: Started sshd@16-195.201.238.219:22-147.75.109.163:42422.service - OpenSSH per-connection server daemon (147.75.109.163:42422). Sep 13 00:27:28.349309 sshd[1712]: Connection closed by authenticating user root 107.175.39.180 port 55296 [preauth] Sep 13 00:27:28.352376 systemd[1]: sshd@14-195.201.238.219:22-107.175.39.180:55296.service: Deactivated successfully. 
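
The sudo session above deletes the shipped audit rule files and restarts audit-rules.service, which re-runs the rule loader and ends with an empty rule set ("auditctl: No rules", "augenrules: No rules"). The manual equivalents of what that unit does:

    auditctl -l        # list the rules currently loaded in the kernel
    augenrules --load  # recompile /etc/audit/rules.d/*.rules and load them
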
Sep 13 00:27:28.358541 sshd[1753]: Accepted publickey for core from 147.75.109.163 port 42422 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:27:28.360522 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:28.370816 systemd-logind[1449]: New session 7 of user core. Sep 13 00:27:28.379163 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:27:28.613386 systemd[1]: Started sshd@17-195.201.238.219:22-107.175.39.180:58782.service - OpenSSH per-connection server daemon (107.175.39.180:58782). Sep 13 00:27:28.888085 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:27:28.888378 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:27:29.249182 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:27:29.250982 (dockerd)[1776]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:27:29.389486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 00:27:29.401533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:27:29.595137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:27:29.601395 dockerd[1776]: time="2025-09-13T00:27:29.601242297Z" level=info msg="Starting up" Sep 13 00:27:29.605235 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:27:29.683399 kubelet[1792]: E0913 00:27:29.683328 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:27:29.686746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:27:29.687181 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:27:29.738652 dockerd[1776]: time="2025-09-13T00:27:29.738590110Z" level=info msg="Loading containers: start." Sep 13 00:27:29.891024 kernel: Initializing XFRM netlink socket Sep 13 00:27:29.991324 systemd-networkd[1372]: docker0: Link UP Sep 13 00:27:30.016760 dockerd[1776]: time="2025-09-13T00:27:30.016662877Z" level=info msg="Loading containers: done." Sep 13 00:27:30.042719 dockerd[1776]: time="2025-09-13T00:27:30.041602203Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:27:30.042719 dockerd[1776]: time="2025-09-13T00:27:30.042177011Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:27:30.042719 dockerd[1776]: time="2025-09-13T00:27:30.042343770Z" level=info msg="Daemon has completed initialization" Sep 13 00:27:30.096658 dockerd[1776]: time="2025-09-13T00:27:30.095394595Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:27:30.095827 systemd[1]: Started docker.service - Docker Application Container Engine. 
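
dockerd came up on overlay2 but warned that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; per the warning itself this only degrades image-build performance. The effective driver can be confirmed through the API the daemon just exposed on /run/docker.sock:

    docker info --format '{{.Driver}}'            # overlay2, matching the log
    docker info --format '{{json .DriverStatus}}'
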
Sep 13 00:27:30.812365 update_engine[1451]: I20250913 00:27:30.812000 1451 update_attempter.cc:509] Updating boot flags... Sep 13 00:27:30.862836 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1937) Sep 13 00:27:31.281690 containerd[1470]: time="2025-09-13T00:27:31.281618666Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:27:31.974343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181190865.mount: Deactivated successfully. Sep 13 00:27:32.373136 sshd[1759]: Connection closed by authenticating user root 107.175.39.180 port 58782 [preauth] Sep 13 00:27:32.376456 systemd[1]: sshd@17-195.201.238.219:22-107.175.39.180:58782.service: Deactivated successfully. Sep 13 00:27:32.638563 systemd[1]: Started sshd@18-195.201.238.219:22-107.175.39.180:58798.service - OpenSSH per-connection server daemon (107.175.39.180:58798). Sep 13 00:27:33.010868 containerd[1470]: time="2025-09-13T00:27:33.009777445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:33.013705 containerd[1470]: time="2025-09-13T00:27:33.013611086Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687423" Sep 13 00:27:33.015633 containerd[1470]: time="2025-09-13T00:27:33.015547234Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:33.021646 containerd[1470]: time="2025-09-13T00:27:33.021561236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:33.023625 containerd[1470]: time="2025-09-13T00:27:33.022658483Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.740983316s" Sep 13 00:27:33.023625 containerd[1470]: time="2025-09-13T00:27:33.022739337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 13 00:27:33.025318 containerd[1470]: time="2025-09-13T00:27:33.025271623Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:27:34.121504 containerd[1470]: time="2025-09-13T00:27:34.119910759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:34.121504 containerd[1470]: time="2025-09-13T00:27:34.121442096Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459787" Sep 13 00:27:34.122312 containerd[1470]: time="2025-09-13T00:27:34.122270005Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:34.129371 containerd[1470]: time="2025-09-13T00:27:34.129303005Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.103978968s" Sep 13 00:27:34.129665 containerd[1470]: time="2025-09-13T00:27:34.129615532Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 13 00:27:34.129869 containerd[1470]: time="2025-09-13T00:27:34.129571719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:34.131441 containerd[1470]: time="2025-09-13T00:27:34.131395065Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:27:35.094111 containerd[1470]: time="2025-09-13T00:27:35.093988908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:35.095635 containerd[1470]: time="2025-09-13T00:27:35.095295686Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127526" Sep 13 00:27:35.096781 containerd[1470]: time="2025-09-13T00:27:35.096741931Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:35.101481 containerd[1470]: time="2025-09-13T00:27:35.101419059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:35.102704 containerd[1470]: time="2025-09-13T00:27:35.102612745Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 970.872249ms" Sep 13 00:27:35.102704 containerd[1470]: time="2025-09-13T00:27:35.102659381Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 13 00:27:35.103706 containerd[1470]: time="2025-09-13T00:27:35.103341114Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:27:36.063852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount607026797.mount: Deactivated successfully. 
Sep 13 00:27:36.353051 containerd[1470]: time="2025-09-13T00:27:36.352798874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:36.355656 containerd[1470]: time="2025-09-13T00:27:36.355575748Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954933" Sep 13 00:27:36.356719 containerd[1470]: time="2025-09-13T00:27:36.356623554Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:36.359742 containerd[1470]: time="2025-09-13T00:27:36.359617230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:36.361218 containerd[1470]: time="2025-09-13T00:27:36.360614291Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.257169661s" Sep 13 00:27:36.361218 containerd[1470]: time="2025-09-13T00:27:36.360661259Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 13 00:27:36.361567 containerd[1470]: time="2025-09-13T00:27:36.361518702Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:27:37.065751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1717567606.mount: Deactivated successfully. 
Sep 13 00:27:37.779299 containerd[1470]: time="2025-09-13T00:27:37.778018555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:37.782830 containerd[1470]: time="2025-09-13T00:27:37.782762453Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Sep 13 00:27:37.786505 containerd[1470]: time="2025-09-13T00:27:37.786436115Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:37.791787 containerd[1470]: time="2025-09-13T00:27:37.791728983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:37.793272 containerd[1470]: time="2025-09-13T00:27:37.793174636Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.431614317s" Sep 13 00:27:37.793400 containerd[1470]: time="2025-09-13T00:27:37.793277475Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 13 00:27:37.793914 containerd[1470]: time="2025-09-13T00:27:37.793885917Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:27:38.361811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267664195.mount: Deactivated successfully. 
Sep 13 00:27:38.377457 containerd[1470]: time="2025-09-13T00:27:38.376200001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:38.381662 containerd[1470]: time="2025-09-13T00:27:38.381537060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 13 00:27:38.383626 containerd[1470]: time="2025-09-13T00:27:38.383334087Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:38.388990 containerd[1470]: time="2025-09-13T00:27:38.387539981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:38.388990 containerd[1470]: time="2025-09-13T00:27:38.388456898Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 594.537416ms" Sep 13 00:27:38.388990 containerd[1470]: time="2025-09-13T00:27:38.388495830Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 00:27:38.389998 containerd[1470]: time="2025-09-13T00:27:38.389731951Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:27:39.086771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2999020525.mount: Deactivated successfully. Sep 13 00:27:39.888486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 13 00:27:39.899473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:27:40.055347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:27:40.056225 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:27:40.117862 kubelet[2139]: E0913 00:27:40.117809 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:27:40.120798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:27:40.120958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:27:40.613714 sshd[2005]: Connection closed by authenticating user root 107.175.39.180 port 58798 [preauth] Sep 13 00:27:40.615876 systemd[1]: sshd@18-195.201.238.219:22-107.175.39.180:58798.service: Deactivated successfully. 
Sep 13 00:27:40.646727 containerd[1470]: time="2025-09-13T00:27:40.646587985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:40.649565 containerd[1470]: time="2025-09-13T00:27:40.649455051Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537235" Sep 13 00:27:40.650362 containerd[1470]: time="2025-09-13T00:27:40.650259787Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:40.655764 containerd[1470]: time="2025-09-13T00:27:40.654895780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:40.656800 containerd[1470]: time="2025-09-13T00:27:40.656743632Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.266973238s" Sep 13 00:27:40.657702 containerd[1470]: time="2025-09-13T00:27:40.656798214Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 13 00:27:40.784055 systemd[1]: Started sshd@19-195.201.238.219:22-107.175.39.180:51250.service - OpenSSH per-connection server daemon (107.175.39.180:51250). Sep 13 00:27:45.046837 systemd[1]: Started sshd@20-195.201.238.219:22-185.156.73.233:61494.service - OpenSSH per-connection server daemon (185.156.73.233:61494). Sep 13 00:27:45.679124 sshd[2158]: Connection closed by authenticating user root 107.175.39.180 port 51250 [preauth] Sep 13 00:27:45.681217 systemd[1]: sshd@19-195.201.238.219:22-107.175.39.180:51250.service: Deactivated successfully. Sep 13 00:27:45.954935 systemd[1]: Started sshd@21-195.201.238.219:22-107.175.39.180:39706.service - OpenSSH per-connection server daemon (107.175.39.180:39706). Sep 13 00:27:46.081175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:27:46.090180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:27:46.113443 sshd[2175]: Invalid user ubnt from 185.156.73.233 port 61494 Sep 13 00:27:46.131597 systemd[1]: Reloading requested from client PID 2188 ('systemctl') (unit session-7.scope)... Sep 13 00:27:46.131622 systemd[1]: Reloading... Sep 13 00:27:46.167900 sshd[2175]: Connection closed by invalid user ubnt 185.156.73.233 port 61494 [preauth] Sep 13 00:27:46.276390 zram_generator::config[2232]: No configuration found. Sep 13 00:27:46.383793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:27:46.455569 systemd[1]: Reloading finished in 323 ms. Sep 13 00:27:46.488941 systemd[1]: sshd@20-195.201.238.219:22-185.156.73.233:61494.service: Deactivated successfully. Sep 13 00:27:46.511658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:27:46.515876 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 13 00:27:46.522441 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:27:46.524180 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:27:46.533012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:27:46.661745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:27:46.675159 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:27:46.728155 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:27:46.728155 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:27:46.728155 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:27:46.728572 kubelet[2284]: I0913 00:27:46.728142 2284 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:27:47.842505 kubelet[2284]: I0913 00:27:47.842121 2284 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:27:47.842505 kubelet[2284]: I0913 00:27:47.842183 2284 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:27:47.842996 kubelet[2284]: I0913 00:27:47.842831 2284 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:27:47.870479 kubelet[2284]: E0913 00:27:47.870135 2284 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://195.201.238.219:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 195.201.238.219:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:27:47.870479 kubelet[2284]: I0913 00:27:47.870301 2284 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:27:47.880428 kubelet[2284]: E0913 00:27:47.880370 2284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:27:47.880428 kubelet[2284]: I0913 00:27:47.880418 2284 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:27:47.884423 kubelet[2284]: I0913 00:27:47.884362 2284 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:27:47.884693 kubelet[2284]: I0913 00:27:47.884652 2284 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:27:47.884938 kubelet[2284]: I0913 00:27:47.884874 2284 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:27:47.885097 kubelet[2284]: I0913 00:27:47.884914 2284 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-9bb66b8eb5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:27:47.885236 kubelet[2284]: I0913 00:27:47.885158 2284 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:27:47.885236 kubelet[2284]: I0913 00:27:47.885168 2284 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:27:47.885460 kubelet[2284]: I0913 00:27:47.885413 2284 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:27:47.889728 kubelet[2284]: I0913 00:27:47.888968 2284 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:27:47.889728 kubelet[2284]: I0913 00:27:47.889017 2284 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:27:47.889728 kubelet[2284]: I0913 00:27:47.889155 2284 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:27:47.889728 kubelet[2284]: I0913 00:27:47.889237 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:27:47.895334 kubelet[2284]: W0913 00:27:47.895146 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://195.201.238.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-9bb66b8eb5&limit=500&resourceVersion=0": dial tcp 195.201.238.219:6443: connect: connection refused Sep 13 00:27:47.895334 kubelet[2284]: E0913 00:27:47.895216 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://195.201.238.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-9bb66b8eb5&limit=500&resourceVersion=0\": dial tcp 195.201.238.219:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:27:47.899163 kubelet[2284]: I0913 00:27:47.897808 2284 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:27:47.899163 kubelet[2284]: I0913 00:27:47.898872 2284 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:27:47.899528 kubelet[2284]: W0913 00:27:47.899494 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:27:47.902890 kubelet[2284]: W0913 00:27:47.896878 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://195.201.238.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 195.201.238.219:6443: connect: connection refused Sep 13 00:27:47.903091 kubelet[2284]: E0913 00:27:47.903070 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://195.201.238.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 195.201.238.219:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:27:47.905246 kubelet[2284]: I0913 00:27:47.905203 2284 server.go:1274] "Started kubelet" Sep 13 00:27:47.907400 kubelet[2284]: I0913 00:27:47.907343 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:27:47.908035 kubelet[2284]: I0913 00:27:47.907995 2284 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:27:47.909255 kubelet[2284]: I0913 00:27:47.909230 2284 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:27:47.910532 kubelet[2284]: I0913 00:27:47.910509 2284 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:27:47.912830 kubelet[2284]: I0913 00:27:47.912767 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:27:47.913493 kubelet[2284]: E0913 00:27:47.911628 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://195.201.238.219:6443/api/v1/namespaces/default/events\": dial tcp 195.201.238.219:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-9bb66b8eb5.1864afffd4ed33c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-9bb66b8eb5,UID:ci-4081-3-5-n-9bb66b8eb5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-9bb66b8eb5,},FirstTimestamp:2025-09-13 00:27:47.905172425 +0000 UTC m=+1.225615331,LastTimestamp:2025-09-13 00:27:47.905172425 +0000 UTC m=+1.225615331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-9bb66b8eb5,}" Sep 13 00:27:47.913950 kubelet[2284]: I0913 00:27:47.913921 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:27:47.918579 kubelet[2284]: E0913 
00:27:47.917203 2284 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:27:47.918579 kubelet[2284]: E0913 00:27:47.917511 2284 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-9bb66b8eb5\" not found" Sep 13 00:27:47.918579 kubelet[2284]: I0913 00:27:47.917539 2284 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:27:47.918579 kubelet[2284]: I0913 00:27:47.917776 2284 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:27:47.918579 kubelet[2284]: I0913 00:27:47.917893 2284 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:27:47.918579 kubelet[2284]: W0913 00:27:47.918335 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://195.201.238.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 195.201.238.219:6443: connect: connection refused Sep 13 00:27:47.918579 kubelet[2284]: E0913 00:27:47.918383 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://195.201.238.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 195.201.238.219:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:27:47.919957 kubelet[2284]: E0913 00:27:47.919909 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://195.201.238.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-9bb66b8eb5?timeout=10s\": dial tcp 195.201.238.219:6443: connect: connection refused" interval="200ms" Sep 13 00:27:47.920298 kubelet[2284]: I0913 00:27:47.920280 2284 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:27:47.920470 kubelet[2284]: I0913 00:27:47.920451 2284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:27:47.921910 kubelet[2284]: I0913 00:27:47.921831 2284 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:27:47.931093 kubelet[2284]: I0913 00:27:47.931024 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:27:47.932452 kubelet[2284]: I0913 00:27:47.932418 2284 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:27:47.932452 kubelet[2284]: I0913 00:27:47.932451 2284 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:27:47.932569 kubelet[2284]: I0913 00:27:47.932473 2284 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:27:47.932569 kubelet[2284]: E0913 00:27:47.932516 2284 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:27:47.941084 kubelet[2284]: W0913 00:27:47.941024 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://195.201.238.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 195.201.238.219:6443: connect: connection refused Sep 13 00:27:47.941221 kubelet[2284]: E0913 00:27:47.941105 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://195.201.238.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 195.201.238.219:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:27:47.954696 kubelet[2284]: I0913 00:27:47.954499 2284 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:27:47.954696 kubelet[2284]: I0913 00:27:47.954528 2284 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:27:47.954696 kubelet[2284]: I0913 00:27:47.954557 2284 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:27:47.957490 kubelet[2284]: I0913 00:27:47.957446 2284 policy_none.go:49] "None policy: Start" Sep 13 00:27:47.958421 kubelet[2284]: I0913 00:27:47.958340 2284 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:27:47.958421 kubelet[2284]: I0913 00:27:47.958402 2284 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:27:47.965059 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:27:47.979405 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:27:47.984005 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:27:48.002706 kubelet[2284]: I0913 00:27:48.002644 2284 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:27:48.003309 kubelet[2284]: I0913 00:27:48.003068 2284 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:27:48.003309 kubelet[2284]: I0913 00:27:48.003104 2284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:27:48.003849 kubelet[2284]: I0913 00:27:48.003562 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:27:48.007803 kubelet[2284]: E0913 00:27:48.007594 2284 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-9bb66b8eb5\" not found" Sep 13 00:27:48.052651 systemd[1]: Created slice kubepods-burstable-pod1f0161c18093bd88057261cd831d0ec7.slice - libcontainer container kubepods-burstable-pod1f0161c18093bd88057261cd831d0ec7.slice. Sep 13 00:27:48.073542 systemd[1]: Created slice kubepods-burstable-podaf02a34c25eb0094c953d3bd1a0aeb30.slice - libcontainer container kubepods-burstable-podaf02a34c25eb0094c953d3bd1a0aeb30.slice. 
Sep 13 00:27:48.085073 systemd[1]: Created slice kubepods-burstable-podc5a75d69944e6a4cf13470709ffc2a68.slice - libcontainer container kubepods-burstable-podc5a75d69944e6a4cf13470709ffc2a68.slice. Sep 13 00:27:48.106558 kubelet[2284]: I0913 00:27:48.106321 2284 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.107874 kubelet[2284]: E0913 00:27:48.107803 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://195.201.238.219:6443/api/v1/nodes\": dial tcp 195.201.238.219:6443: connect: connection refused" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.119997 kubelet[2284]: I0913 00:27:48.119708 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f0161c18093bd88057261cd831d0ec7-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"1f0161c18093bd88057261cd831d0ec7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.119997 kubelet[2284]: I0913 00:27:48.119756 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5a75d69944e6a4cf13470709ffc2a68-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"c5a75d69944e6a4cf13470709ffc2a68\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.119997 kubelet[2284]: I0913 00:27:48.119780 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5a75d69944e6a4cf13470709ffc2a68-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"c5a75d69944e6a4cf13470709ffc2a68\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.119997 kubelet[2284]: I0913 00:27:48.119803 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f0161c18093bd88057261cd831d0ec7-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"1f0161c18093bd88057261cd831d0ec7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.119997 kubelet[2284]: I0913 00:27:48.119824 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f0161c18093bd88057261cd831d0ec7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"1f0161c18093bd88057261cd831d0ec7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.120376 kubelet[2284]: I0913 00:27:48.119843 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f0161c18093bd88057261cd831d0ec7-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"1f0161c18093bd88057261cd831d0ec7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.120376 kubelet[2284]: I0913 00:27:48.119862 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f0161c18093bd88057261cd831d0ec7-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"1f0161c18093bd88057261cd831d0ec7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.120376 kubelet[2284]: I0913 00:27:48.119883 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af02a34c25eb0094c953d3bd1a0aeb30-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"af02a34c25eb0094c953d3bd1a0aeb30\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.120376 kubelet[2284]: I0913 00:27:48.119904 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5a75d69944e6a4cf13470709ffc2a68-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"c5a75d69944e6a4cf13470709ffc2a68\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.121385 kubelet[2284]: E0913 00:27:48.121083 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://195.201.238.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-9bb66b8eb5?timeout=10s\": dial tcp 195.201.238.219:6443: connect: connection refused" interval="400ms" Sep 13 00:27:48.311247 kubelet[2284]: I0913 00:27:48.311193 2284 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.312327 kubelet[2284]: E0913 00:27:48.312259 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://195.201.238.219:6443/api/v1/nodes\": dial tcp 195.201.238.219:6443: connect: connection refused" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.370604 containerd[1470]: time="2025-09-13T00:27:48.370423273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5,Uid:1f0161c18093bd88057261cd831d0ec7,Namespace:kube-system,Attempt:0,}" Sep 13 00:27:48.382453 containerd[1470]: time="2025-09-13T00:27:48.380167207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-9bb66b8eb5,Uid:af02a34c25eb0094c953d3bd1a0aeb30,Namespace:kube-system,Attempt:0,}" Sep 13 00:27:48.389343 containerd[1470]: time="2025-09-13T00:27:48.389235135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-9bb66b8eb5,Uid:c5a75d69944e6a4cf13470709ffc2a68,Namespace:kube-system,Attempt:0,}" Sep 13 00:27:48.522288 kubelet[2284]: E0913 00:27:48.522214 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://195.201.238.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-9bb66b8eb5?timeout=10s\": dial tcp 195.201.238.219:6443: connect: connection refused" interval="800ms" Sep 13 00:27:48.716580 kubelet[2284]: I0913 00:27:48.716101 2284 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.716580 kubelet[2284]: E0913 00:27:48.716486 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://195.201.238.219:6443/api/v1/nodes\": dial tcp 195.201.238.219:6443: connect: connection refused" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:48.970101 kubelet[2284]: W0913 00:27:48.969961 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://195.201.238.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 195.201.238.219:6443: connect: connection refused Sep 13 00:27:48.970722 kubelet[2284]: E0913 00:27:48.970663 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://195.201.238.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 195.201.238.219:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:27:48.984432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635992712.mount: Deactivated successfully. Sep 13 00:27:48.999002 containerd[1470]: time="2025-09-13T00:27:48.998744628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:27:49.003616 containerd[1470]: time="2025-09-13T00:27:49.003408624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:27:49.005774 containerd[1470]: time="2025-09-13T00:27:49.005701742Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:27:49.008378 containerd[1470]: time="2025-09-13T00:27:49.008333046Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:27:49.010124 containerd[1470]: time="2025-09-13T00:27:49.009953878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 13 00:27:49.012699 containerd[1470]: time="2025-09-13T00:27:49.011807251Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:27:49.013977 containerd[1470]: time="2025-09-13T00:27:49.013929996Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:27:49.017716 containerd[1470]: time="2025-09-13T00:27:49.017639066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:27:49.021101 containerd[1470]: time="2025-09-13T00:27:49.020788776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 640.526526ms" Sep 13 00:27:49.024692 containerd[1470]: time="2025-09-13T00:27:49.024627427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 634.994297ms" Sep 13 00:27:49.025876 containerd[1470]: 
time="2025-09-13T00:27:49.025787177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 655.227423ms" Sep 13 00:27:49.054486 kubelet[2284]: W0913 00:27:49.054437 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://195.201.238.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 195.201.238.219:6443: connect: connection refused Sep 13 00:27:49.054789 kubelet[2284]: E0913 00:27:49.054744 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://195.201.238.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 195.201.238.219:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:27:49.187755 containerd[1470]: time="2025-09-13T00:27:49.187570039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:27:49.188075 containerd[1470]: time="2025-09-13T00:27:49.187886487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:27:49.188450 containerd[1470]: time="2025-09-13T00:27:49.188061664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:27:49.188871 containerd[1470]: time="2025-09-13T00:27:49.188796040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:27:49.190663 containerd[1470]: time="2025-09-13T00:27:49.190567670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:27:49.191089 containerd[1470]: time="2025-09-13T00:27:49.190641128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:27:49.191089 containerd[1470]: time="2025-09-13T00:27:49.190657580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:27:49.191089 containerd[1470]: time="2025-09-13T00:27:49.190758900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:27:49.195157 containerd[1470]: time="2025-09-13T00:27:49.194726412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:27:49.195343 containerd[1470]: time="2025-09-13T00:27:49.195128367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:27:49.196195 containerd[1470]: time="2025-09-13T00:27:49.195329845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:27:49.196637 containerd[1470]: time="2025-09-13T00:27:49.196563333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:27:49.219928 systemd[1]: Started cri-containerd-f2bff3ecd07e18ee29f253ef2f91aeed684f26b6fb9cb2c916027e495068f84e.scope - libcontainer container f2bff3ecd07e18ee29f253ef2f91aeed684f26b6fb9cb2c916027e495068f84e. Sep 13 00:27:49.227246 systemd[1]: Started cri-containerd-a7cc9ea6d62fa3f098d105a6a0c610fcabf8b2aed182776f5390353f3e591239.scope - libcontainer container a7cc9ea6d62fa3f098d105a6a0c610fcabf8b2aed182776f5390353f3e591239. Sep 13 00:27:49.241284 systemd[1]: Started cri-containerd-e27e575fbabbc55cd3a875768f24519cf152581597aa93088b2a95c28aef66aa.scope - libcontainer container e27e575fbabbc55cd3a875768f24519cf152581597aa93088b2a95c28aef66aa. Sep 13 00:27:49.299569 containerd[1470]: time="2025-09-13T00:27:49.299531621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5,Uid:1f0161c18093bd88057261cd831d0ec7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2bff3ecd07e18ee29f253ef2f91aeed684f26b6fb9cb2c916027e495068f84e\"" Sep 13 00:27:49.308960 containerd[1470]: time="2025-09-13T00:27:49.308917343Z" level=info msg="CreateContainer within sandbox \"f2bff3ecd07e18ee29f253ef2f91aeed684f26b6fb9cb2c916027e495068f84e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:27:49.324065 kubelet[2284]: E0913 00:27:49.323973 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://195.201.238.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-9bb66b8eb5?timeout=10s\": dial tcp 195.201.238.219:6443: connect: connection refused" interval="1.6s" Sep 13 00:27:49.324261 containerd[1470]: time="2025-09-13T00:27:49.324214341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-9bb66b8eb5,Uid:c5a75d69944e6a4cf13470709ffc2a68,Namespace:kube-system,Attempt:0,} returns sandbox id \"e27e575fbabbc55cd3a875768f24519cf152581597aa93088b2a95c28aef66aa\"" Sep 13 00:27:49.328735 containerd[1470]: time="2025-09-13T00:27:49.328691613Z" level=info msg="CreateContainer within sandbox \"e27e575fbabbc55cd3a875768f24519cf152581597aa93088b2a95c28aef66aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:27:49.328735 containerd[1470]: time="2025-09-13T00:27:49.328784326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-9bb66b8eb5,Uid:af02a34c25eb0094c953d3bd1a0aeb30,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7cc9ea6d62fa3f098d105a6a0c610fcabf8b2aed182776f5390353f3e591239\"" Sep 13 00:27:49.333518 containerd[1470]: time="2025-09-13T00:27:49.333267243Z" level=info msg="CreateContainer within sandbox \"a7cc9ea6d62fa3f098d105a6a0c610fcabf8b2aed182776f5390353f3e591239\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:27:49.343378 containerd[1470]: time="2025-09-13T00:27:49.343014208Z" level=info msg="CreateContainer within sandbox \"f2bff3ecd07e18ee29f253ef2f91aeed684f26b6fb9cb2c916027e495068f84e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6\"" Sep 13 00:27:49.344652 containerd[1470]: time="2025-09-13T00:27:49.344589043Z" level=info msg="StartContainer for \"1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6\"" Sep 13 00:27:49.364424 containerd[1470]: time="2025-09-13T00:27:49.364370760Z" level=info msg="CreateContainer within sandbox 
\"e27e575fbabbc55cd3a875768f24519cf152581597aa93088b2a95c28aef66aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"da9d43b4b73f7af0f4060b46aab32327091305a419f4a31f6c525ba95ee2a51a\"" Sep 13 00:27:49.366710 containerd[1470]: time="2025-09-13T00:27:49.365984666Z" level=info msg="StartContainer for \"da9d43b4b73f7af0f4060b46aab32327091305a419f4a31f6c525ba95ee2a51a\"" Sep 13 00:27:49.368079 kubelet[2284]: W0913 00:27:49.367997 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://195.201.238.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 195.201.238.219:6443: connect: connection refused Sep 13 00:27:49.368410 kubelet[2284]: E0913 00:27:49.368366 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://195.201.238.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 195.201.238.219:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:27:49.371033 containerd[1470]: time="2025-09-13T00:27:49.370983267Z" level=info msg="CreateContainer within sandbox \"a7cc9ea6d62fa3f098d105a6a0c610fcabf8b2aed182776f5390353f3e591239\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47d0332cbced54df5f9838908391c2ce05c5020f4e041465361028234f5679ba\"" Sep 13 00:27:49.372341 kubelet[2284]: W0913 00:27:49.372232 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://195.201.238.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-9bb66b8eb5&limit=500&resourceVersion=0": dial tcp 195.201.238.219:6443: connect: connection refused Sep 13 00:27:49.372341 kubelet[2284]: E0913 00:27:49.372312 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://195.201.238.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-9bb66b8eb5&limit=500&resourceVersion=0\": dial tcp 195.201.238.219:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:27:49.374802 containerd[1470]: time="2025-09-13T00:27:49.373746074Z" level=info msg="StartContainer for \"47d0332cbced54df5f9838908391c2ce05c5020f4e041465361028234f5679ba\"" Sep 13 00:27:49.383928 systemd[1]: Started cri-containerd-1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6.scope - libcontainer container 1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6. Sep 13 00:27:49.418920 systemd[1]: Started cri-containerd-da9d43b4b73f7af0f4060b46aab32327091305a419f4a31f6c525ba95ee2a51a.scope - libcontainer container da9d43b4b73f7af0f4060b46aab32327091305a419f4a31f6c525ba95ee2a51a. Sep 13 00:27:49.443934 systemd[1]: Started cri-containerd-47d0332cbced54df5f9838908391c2ce05c5020f4e041465361028234f5679ba.scope - libcontainer container 47d0332cbced54df5f9838908391c2ce05c5020f4e041465361028234f5679ba. 
Sep 13 00:27:49.455456 containerd[1470]: time="2025-09-13T00:27:49.455393037Z" level=info msg="StartContainer for \"1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6\" returns successfully" Sep 13 00:27:49.494301 containerd[1470]: time="2025-09-13T00:27:49.494155883Z" level=info msg="StartContainer for \"da9d43b4b73f7af0f4060b46aab32327091305a419f4a31f6c525ba95ee2a51a\" returns successfully" Sep 13 00:27:49.516917 containerd[1470]: time="2025-09-13T00:27:49.516847161Z" level=info msg="StartContainer for \"47d0332cbced54df5f9838908391c2ce05c5020f4e041465361028234f5679ba\" returns successfully" Sep 13 00:27:49.521298 kubelet[2284]: I0913 00:27:49.521260 2284 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:49.522389 kubelet[2284]: E0913 00:27:49.522335 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://195.201.238.219:6443/api/v1/nodes\": dial tcp 195.201.238.219:6443: connect: connection refused" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:50.515031 sshd[2180]: Connection closed by authenticating user root 107.175.39.180 port 39706 [preauth] Sep 13 00:27:50.516273 systemd[1]: sshd@21-195.201.238.219:22-107.175.39.180:39706.service: Deactivated successfully. Sep 13 00:27:50.705003 systemd[1]: Started sshd@22-195.201.238.219:22-107.175.39.180:39712.service - OpenSSH per-connection server daemon (107.175.39.180:39712). Sep 13 00:27:51.125421 kubelet[2284]: I0913 00:27:51.125385 2284 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:52.365560 kubelet[2284]: E0913 00:27:52.365500 2284 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-9bb66b8eb5\" not found" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:52.379487 kubelet[2284]: I0913 00:27:52.378647 2284 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:27:52.379487 kubelet[2284]: E0913 00:27:52.378740 2284 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-9bb66b8eb5\": node \"ci-4081-3-5-n-9bb66b8eb5\" not found" Sep 13 00:27:52.444932 kubelet[2284]: E0913 00:27:52.444791 2284 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-5-n-9bb66b8eb5.1864afffd4ed33c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-9bb66b8eb5,UID:ci-4081-3-5-n-9bb66b8eb5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-9bb66b8eb5,},FirstTimestamp:2025-09-13 00:27:47.905172425 +0000 UTC m=+1.225615331,LastTimestamp:2025-09-13 00:27:47.905172425 +0000 UTC m=+1.225615331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-9bb66b8eb5,}" Sep 13 00:27:52.897220 kubelet[2284]: I0913 00:27:52.897128 2284 apiserver.go:52] "Watching apiserver" Sep 13 00:27:52.918734 kubelet[2284]: I0913 00:27:52.918632 2284 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:27:54.633009 systemd[1]: Reloading requested from client PID 2568 ('systemctl') (unit session-7.scope)... Sep 13 00:27:54.633040 systemd[1]: Reloading... 
Sep 13 00:27:54.766722 zram_generator::config[2622]: No configuration found.
Sep 13 00:27:54.865387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:27:54.958260 systemd[1]: Reloading finished in 324 ms.
Sep 13 00:27:55.018700 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:27:55.032849 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:27:55.033583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:27:55.033890 systemd[1]: kubelet.service: Consumed 1.710s CPU time, 127.9M memory peak, 0B memory swap peak.
Sep 13 00:27:55.040175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:27:55.193986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:27:55.209139 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:27:55.292981 kubelet[2655]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:27:55.292981 kubelet[2655]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:27:55.292981 kubelet[2655]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:27:55.292981 kubelet[2655]: I0913 00:27:55.290726 2655 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:27:55.300209 kubelet[2655]: I0913 00:27:55.300137 2655 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:27:55.300209 kubelet[2655]: I0913 00:27:55.300180 2655 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:27:55.300539 kubelet[2655]: I0913 00:27:55.300501 2655 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:27:55.302340 kubelet[2655]: I0913 00:27:55.302311 2655 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 00:27:55.305016 kubelet[2655]: I0913 00:27:55.304894 2655 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:27:55.309365 kubelet[2655]: E0913 00:27:55.309291 2655 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:27:55.309365 kubelet[2655]: I0913 00:27:55.309334 2655 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:27:55.312101 kubelet[2655]: I0913 00:27:55.312005 2655 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:27:55.313876 kubelet[2655]: I0913 00:27:55.313803 2655 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:27:55.314071 kubelet[2655]: I0913 00:27:55.313954 2655 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:27:55.314290 kubelet[2655]: I0913 00:27:55.313979 2655 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-9bb66b8eb5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:27:55.314290 kubelet[2655]: I0913 00:27:55.314265 2655 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:27:55.314290 kubelet[2655]: I0913 00:27:55.314275 2655 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:27:55.314547 kubelet[2655]: I0913 00:27:55.314322 2655 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:27:55.314547 kubelet[2655]: I0913 00:27:55.314427 2655 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:27:55.314547 kubelet[2655]: I0913 00:27:55.314438 2655 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:27:55.314547 kubelet[2655]: I0913 00:27:55.314463 2655 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:27:55.314547 kubelet[2655]: I0913 00:27:55.314472 2655 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:27:55.323389 kubelet[2655]: I0913 00:27:55.323349 2655 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:27:55.327599 kubelet[2655]: I0913 00:27:55.324590 2655 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:27:55.327599 kubelet[2655]: I0913 00:27:55.325245 2655 server.go:1274] "Started kubelet"
Sep 13 00:27:55.327599 kubelet[2655]: I0913 00:27:55.326053 2655 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:27:55.327599 kubelet[2655]: I0913 00:27:55.327054 2655 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:27:55.329013 kubelet[2655]: I0913 00:27:55.328952 2655 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:27:55.330685 kubelet[2655]: I0913 00:27:55.329296 2655 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:27:55.330911 kubelet[2655]: I0913 00:27:55.330880 2655 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:27:55.334599 kubelet[2655]: I0913 00:27:55.334548 2655 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:27:55.340246 kubelet[2655]: I0913 00:27:55.336202 2655 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:27:55.340246 kubelet[2655]: E0913 00:27:55.336485 2655 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-9bb66b8eb5\" not found"
Sep 13 00:27:55.340246 kubelet[2655]: I0913 00:27:55.338978 2655 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:27:55.340246 kubelet[2655]: I0913 00:27:55.339114 2655 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:27:55.343918 kubelet[2655]: I0913 00:27:55.343854 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:27:55.345989 kubelet[2655]: I0913 00:27:55.345947 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:27:55.345989 kubelet[2655]: I0913 00:27:55.345980 2655 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:27:55.346145 kubelet[2655]: I0913 00:27:55.346008 2655 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:27:55.346145 kubelet[2655]: E0913 00:27:55.346065 2655 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:27:55.377655 kubelet[2655]: I0913 00:27:55.375515 2655 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:27:55.377655 kubelet[2655]: I0913 00:27:55.375797 2655 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:27:55.382551 kubelet[2655]: I0913 00:27:55.382510 2655 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:27:55.385320 kubelet[2655]: E0913 00:27:55.385276 2655 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:27:55.438922 kubelet[2655]: I0913 00:27:55.438887 2655 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:27:55.439485 kubelet[2655]: I0913 00:27:55.439090 2655 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:27:55.439485 kubelet[2655]: I0913 00:27:55.439136 2655 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:27:55.439485 kubelet[2655]: I0913 00:27:55.439327 2655 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:27:55.439485 kubelet[2655]: I0913 00:27:55.439339 2655 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:27:55.439485 kubelet[2655]: I0913 00:27:55.439360 2655 policy_none.go:49] "None policy: Start"
Sep 13 00:27:55.440397 kubelet[2655]: I0913 00:27:55.440346 2655 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:27:55.440397 kubelet[2655]: I0913 00:27:55.440385 2655 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:27:55.440617 kubelet[2655]: I0913 00:27:55.440563 2655 state_mem.go:75] "Updated machine memory state"
Sep 13 00:27:55.446338 kubelet[2655]: I0913 00:27:55.445663 2655 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:27:55.446338 kubelet[2655]: I0913 00:27:55.446011 2655 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:27:55.446338 kubelet[2655]: I0913 00:27:55.446097 2655 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:27:55.446338 kubelet[2655]: E0913 00:27:55.446148 2655 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 13 00:27:55.450710 kubelet[2655]: I0913 00:27:55.450649 2655 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:27:55.454311 sshd[2564]: Connection closed by authenticating user root 107.175.39.180 port 39712 [preauth]
Sep 13 00:27:55.459434 systemd[1]: sshd@22-195.201.238.219:22-107.175.39.180:39712.service: Deactivated successfully.
Sep 13 00:27:55.565860 kubelet[2655]: I0913 00:27:55.564585 2655 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.581764 kubelet[2655]: I0913 00:27:55.580398 2655 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.582083 kubelet[2655]: I0913 00:27:55.582061 2655 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.658902 kubelet[2655]: E0913 00:27:55.658869 2655 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-5-n-9bb66b8eb5\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.670036 systemd[1]: Started sshd@23-195.201.238.219:22-107.175.39.180:38846.service - OpenSSH per-connection server daemon (107.175.39.180:38846).
Sep 13 00:27:55.742119 kubelet[2655]: I0913 00:27:55.741798 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f0161c18093bd88057261cd831d0ec7-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"1f0161c18093bd88057261cd831d0ec7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.742119 kubelet[2655]: I0913 00:27:55.741866 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af02a34c25eb0094c953d3bd1a0aeb30-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"af02a34c25eb0094c953d3bd1a0aeb30\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.742119 kubelet[2655]: I0913 00:27:55.741902 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5a75d69944e6a4cf13470709ffc2a68-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"c5a75d69944e6a4cf13470709ffc2a68\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.742119 kubelet[2655]: I0913 00:27:55.741932 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5a75d69944e6a4cf13470709ffc2a68-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"c5a75d69944e6a4cf13470709ffc2a68\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.742119 kubelet[2655]: I0913 00:27:55.741966 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5a75d69944e6a4cf13470709ffc2a68-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"c5a75d69944e6a4cf13470709ffc2a68\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.742350 kubelet[2655]: I0913 00:27:55.741996 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f0161c18093bd88057261cd831d0ec7-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"1f0161c18093bd88057261cd831d0ec7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.742350 kubelet[2655]: I0913 00:27:55.742027 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f0161c18093bd88057261cd831d0ec7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"1f0161c18093bd88057261cd831d0ec7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.742350 kubelet[2655]: I0913 00:27:55.742074 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f0161c18093bd88057261cd831d0ec7-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"1f0161c18093bd88057261cd831d0ec7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:55.742350 kubelet[2655]: I0913 00:27:55.742123 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f0161c18093bd88057261cd831d0ec7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5\" (UID: \"1f0161c18093bd88057261cd831d0ec7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:56.323710 kubelet[2655]: I0913 00:27:56.323605 2655 apiserver.go:52] "Watching apiserver"
Sep 13 00:27:56.339547 kubelet[2655]: I0913 00:27:56.339489 2655 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:27:56.426703 kubelet[2655]: E0913 00:27:56.426428 2655 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-5-n-9bb66b8eb5\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-9bb66b8eb5"
Sep 13 00:27:56.458258 kubelet[2655]: I0913 00:27:56.456815 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-9bb66b8eb5" podStartSLOduration=1.45677338 podStartE2EDuration="1.45677338s" podCreationTimestamp="2025-09-13 00:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:27:56.456478988 +0000 UTC m=+1.239510216" watchObservedRunningTime="2025-09-13 00:27:56.45677338 +0000 UTC m=+1.239804648"
Sep 13 00:27:56.474897 kubelet[2655]: I0913 00:27:56.474015 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-9bb66b8eb5" podStartSLOduration=3.473995614 podStartE2EDuration="3.473995614s" podCreationTimestamp="2025-09-13 00:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:27:56.47391447 +0000 UTC m=+1.256945738" watchObservedRunningTime="2025-09-13 00:27:56.473995614 +0000 UTC m=+1.257026882"
Sep 13 00:27:56.506553 kubelet[2655]: I0913 00:27:56.506474 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-9bb66b8eb5" podStartSLOduration=1.5064572410000001 podStartE2EDuration="1.506457241s" podCreationTimestamp="2025-09-13 00:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:27:56.487983662 +0000 UTC m=+1.271014930" watchObservedRunningTime="2025-09-13 00:27:56.506457241 +0000 UTC m=+1.289488509"
Sep 13 00:27:59.749280 sshd[2690]: Connection closed by authenticating user root 107.175.39.180 port 38846 [preauth]
Sep 13 00:27:59.752885 systemd[1]: sshd@23-195.201.238.219:22-107.175.39.180:38846.service: Deactivated successfully.
Sep 13 00:27:59.893229 systemd[1]: Started sshd@24-195.201.238.219:22-107.175.39.180:38858.service - OpenSSH per-connection server daemon (107.175.39.180:38858).
Sep 13 00:28:01.040701 kubelet[2655]: I0913 00:28:01.040009 2655 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:28:01.041167 containerd[1470]: time="2025-09-13T00:28:01.040499226Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:28:01.041355 kubelet[2655]: I0913 00:28:01.040833 2655 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:28:01.869823 systemd[1]: Created slice kubepods-besteffort-pod57a7f4c8_cb92_4f10_be1f_c0e08776e85f.slice - libcontainer container kubepods-besteffort-pod57a7f4c8_cb92_4f10_be1f_c0e08776e85f.slice.
Sep 13 00:28:01.892104 kubelet[2655]: I0913 00:28:01.891861 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57a7f4c8-cb92-4f10-be1f-c0e08776e85f-xtables-lock\") pod \"kube-proxy-t62lf\" (UID: \"57a7f4c8-cb92-4f10-be1f-c0e08776e85f\") " pod="kube-system/kube-proxy-t62lf"
Sep 13 00:28:01.892104 kubelet[2655]: I0913 00:28:01.891927 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57a7f4c8-cb92-4f10-be1f-c0e08776e85f-lib-modules\") pod \"kube-proxy-t62lf\" (UID: \"57a7f4c8-cb92-4f10-be1f-c0e08776e85f\") " pod="kube-system/kube-proxy-t62lf"
Sep 13 00:28:01.892104 kubelet[2655]: I0913 00:28:01.891961 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57a7f4c8-cb92-4f10-be1f-c0e08776e85f-kube-proxy\") pod \"kube-proxy-t62lf\" (UID: \"57a7f4c8-cb92-4f10-be1f-c0e08776e85f\") " pod="kube-system/kube-proxy-t62lf"
Sep 13 00:28:01.892104 kubelet[2655]: I0913 00:28:01.891988 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2jzj\" (UniqueName: \"kubernetes.io/projected/57a7f4c8-cb92-4f10-be1f-c0e08776e85f-kube-api-access-k2jzj\") pod \"kube-proxy-t62lf\" (UID: \"57a7f4c8-cb92-4f10-be1f-c0e08776e85f\") " pod="kube-system/kube-proxy-t62lf"
Sep 13 00:28:02.005731 kubelet[2655]: E0913 00:28:02.005664 2655 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 13 00:28:02.005731 kubelet[2655]: E0913 00:28:02.005731 2655 projected.go:194] Error preparing data for projected volume kube-api-access-k2jzj for pod kube-system/kube-proxy-t62lf: configmap "kube-root-ca.crt" not found
Sep 13 00:28:02.005918 kubelet[2655]: E0913 00:28:02.005805 2655 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/57a7f4c8-cb92-4f10-be1f-c0e08776e85f-kube-api-access-k2jzj podName:57a7f4c8-cb92-4f10-be1f-c0e08776e85f nodeName:}" failed. No retries permitted until 2025-09-13 00:28:02.505782345 +0000 UTC m=+7.288813613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k2jzj" (UniqueName: "kubernetes.io/projected/57a7f4c8-cb92-4f10-be1f-c0e08776e85f-kube-api-access-k2jzj") pod "kube-proxy-t62lf" (UID: "57a7f4c8-cb92-4f10-be1f-c0e08776e85f") : configmap "kube-root-ca.crt" not found
Sep 13 00:28:02.192531 systemd[1]: Created slice kubepods-besteffort-podb12a629d_ccf7_4c6e_ae83_1d5de9bf2e13.slice - libcontainer container kubepods-besteffort-podb12a629d_ccf7_4c6e_ae83_1d5de9bf2e13.slice.
Sep 13 00:28:02.294444 kubelet[2655]: I0913 00:28:02.294259 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b12a629d-ccf7-4c6e-ae83-1d5de9bf2e13-var-lib-calico\") pod \"tigera-operator-58fc44c59b-wwxp7\" (UID: \"b12a629d-ccf7-4c6e-ae83-1d5de9bf2e13\") " pod="tigera-operator/tigera-operator-58fc44c59b-wwxp7"
Sep 13 00:28:02.294444 kubelet[2655]: I0913 00:28:02.294314 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk9cf\" (UniqueName: \"kubernetes.io/projected/b12a629d-ccf7-4c6e-ae83-1d5de9bf2e13-kube-api-access-mk9cf\") pod \"tigera-operator-58fc44c59b-wwxp7\" (UID: \"b12a629d-ccf7-4c6e-ae83-1d5de9bf2e13\") " pod="tigera-operator/tigera-operator-58fc44c59b-wwxp7"
Sep 13 00:28:02.500031 containerd[1470]: time="2025-09-13T00:28:02.499432924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-wwxp7,Uid:b12a629d-ccf7-4c6e-ae83-1d5de9bf2e13,Namespace:tigera-operator,Attempt:0,}"
Sep 13 00:28:02.531491 containerd[1470]: time="2025-09-13T00:28:02.531243054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:28:02.531491 containerd[1470]: time="2025-09-13T00:28:02.531302008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:28:02.531491 containerd[1470]: time="2025-09-13T00:28:02.531313374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:28:02.531491 containerd[1470]: time="2025-09-13T00:28:02.531410750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:28:02.567096 systemd[1]: Started cri-containerd-85a1e3c779c20809cb306b9af9b26c49bde9119defbcd08ab3d34ff9ea1a2647.scope - libcontainer container 85a1e3c779c20809cb306b9af9b26c49bde9119defbcd08ab3d34ff9ea1a2647.
Sep 13 00:28:02.610610 containerd[1470]: time="2025-09-13T00:28:02.610560720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-wwxp7,Uid:b12a629d-ccf7-4c6e-ae83-1d5de9bf2e13,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"85a1e3c779c20809cb306b9af9b26c49bde9119defbcd08ab3d34ff9ea1a2647\""
Sep 13 00:28:02.613929 containerd[1470]: time="2025-09-13T00:28:02.613885668Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 00:28:02.781791 containerd[1470]: time="2025-09-13T00:28:02.781351747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t62lf,Uid:57a7f4c8-cb92-4f10-be1f-c0e08776e85f,Namespace:kube-system,Attempt:0,}"
Sep 13 00:28:02.786739 sshd[2704]: Connection closed by authenticating user root 107.175.39.180 port 38858 [preauth]
Sep 13 00:28:02.792021 systemd[1]: sshd@24-195.201.238.219:22-107.175.39.180:38858.service: Deactivated successfully.
Sep 13 00:28:02.813500 containerd[1470]: time="2025-09-13T00:28:02.813036765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:28:02.813500 containerd[1470]: time="2025-09-13T00:28:02.813125816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:28:02.813500 containerd[1470]: time="2025-09-13T00:28:02.813136863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:28:02.813500 containerd[1470]: time="2025-09-13T00:28:02.813240602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:28:02.833978 systemd[1]: Started cri-containerd-278582664011793ea5debaef3288cebf477a3f639cfb67e1cd1e2f7be04f207b.scope - libcontainer container 278582664011793ea5debaef3288cebf477a3f639cfb67e1cd1e2f7be04f207b.
Sep 13 00:28:02.862209 containerd[1470]: time="2025-09-13T00:28:02.861960874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t62lf,Uid:57a7f4c8-cb92-4f10-be1f-c0e08776e85f,Namespace:kube-system,Attempt:0,} returns sandbox id \"278582664011793ea5debaef3288cebf477a3f639cfb67e1cd1e2f7be04f207b\""
Sep 13 00:28:02.866472 containerd[1470]: time="2025-09-13T00:28:02.866423234Z" level=info msg="CreateContainer within sandbox \"278582664011793ea5debaef3288cebf477a3f639cfb67e1cd1e2f7be04f207b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:28:02.886542 containerd[1470]: time="2025-09-13T00:28:02.886190055Z" level=info msg="CreateContainer within sandbox \"278582664011793ea5debaef3288cebf477a3f639cfb67e1cd1e2f7be04f207b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b295f5a1dc6847370de3347924e9dd0df2ad9ca47ec859d3efa03a6f4a94a970\""
Sep 13 00:28:02.887889 containerd[1470]: time="2025-09-13T00:28:02.887471430Z" level=info msg="StartContainer for \"b295f5a1dc6847370de3347924e9dd0df2ad9ca47ec859d3efa03a6f4a94a970\""
Sep 13 00:28:02.918915 systemd[1]: Started cri-containerd-b295f5a1dc6847370de3347924e9dd0df2ad9ca47ec859d3efa03a6f4a94a970.scope - libcontainer container b295f5a1dc6847370de3347924e9dd0df2ad9ca47ec859d3efa03a6f4a94a970.
Sep 13 00:28:02.967738 containerd[1470]: time="2025-09-13T00:28:02.967642786Z" level=info msg="StartContainer for \"b295f5a1dc6847370de3347924e9dd0df2ad9ca47ec859d3efa03a6f4a94a970\" returns successfully"
Sep 13 00:28:02.975002 systemd[1]: Started sshd@25-195.201.238.219:22-107.175.39.180:38884.service - OpenSSH per-connection server daemon (107.175.39.180:38884).
Sep 13 00:28:04.342534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2523830083.mount: Deactivated successfully.
Sep 13 00:28:04.711866 sshd[2823]: Connection closed by authenticating user root 107.175.39.180 port 38884 [preauth]
Sep 13 00:28:04.716214 systemd[1]: sshd@25-195.201.238.219:22-107.175.39.180:38884.service: Deactivated successfully.
Sep 13 00:28:04.853712 containerd[1470]: time="2025-09-13T00:28:04.853287682Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:04.856415 containerd[1470]: time="2025-09-13T00:28:04.855968474Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 13 00:28:04.858366 containerd[1470]: time="2025-09-13T00:28:04.858267547Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:04.865721 containerd[1470]: time="2025-09-13T00:28:04.864549688Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:04.866578 containerd[1470]: time="2025-09-13T00:28:04.866527035Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.25259226s"
Sep 13 00:28:04.866578 containerd[1470]: time="2025-09-13T00:28:04.866577101Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 13 00:28:04.871048 containerd[1470]: time="2025-09-13T00:28:04.870025451Z" level=info msg="CreateContainer within sandbox \"85a1e3c779c20809cb306b9af9b26c49bde9119defbcd08ab3d34ff9ea1a2647\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:28:04.886792 systemd[1]: Started sshd@26-195.201.238.219:22-107.175.39.180:38888.service - OpenSSH per-connection server daemon (107.175.39.180:38888).
Sep 13 00:28:04.891294 kubelet[2655]: I0913 00:28:04.891153 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t62lf" podStartSLOduration=3.891131287 podStartE2EDuration="3.891131287s" podCreationTimestamp="2025-09-13 00:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:28:03.447688545 +0000 UTC m=+8.230719813" watchObservedRunningTime="2025-09-13 00:28:04.891131287 +0000 UTC m=+9.674162555"
Sep 13 00:28:04.901933 containerd[1470]: time="2025-09-13T00:28:04.901804107Z" level=info msg="CreateContainer within sandbox \"85a1e3c779c20809cb306b9af9b26c49bde9119defbcd08ab3d34ff9ea1a2647\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c\""
Sep 13 00:28:04.903771 containerd[1470]: time="2025-09-13T00:28:04.902855453Z" level=info msg="StartContainer for \"d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c\""
Sep 13 00:28:04.955059 systemd[1]: Started cri-containerd-d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c.scope - libcontainer container d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c.
Sep 13 00:28:04.998094 containerd[1470]: time="2025-09-13T00:28:04.997792854Z" level=info msg="StartContainer for \"d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c\" returns successfully"
Sep 13 00:28:05.483369 kubelet[2655]: I0913 00:28:05.483179 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-wwxp7" podStartSLOduration=1.228051525 podStartE2EDuration="3.483137704s" podCreationTimestamp="2025-09-13 00:28:02 +0000 UTC" firstStartedPulling="2025-09-13 00:28:02.612724922 +0000 UTC m=+7.395756190" lastFinishedPulling="2025-09-13 00:28:04.867811101 +0000 UTC m=+9.650842369" observedRunningTime="2025-09-13 00:28:05.461151276 +0000 UTC m=+10.244182584" watchObservedRunningTime="2025-09-13 00:28:05.483137704 +0000 UTC m=+10.266168972"
Sep 13 00:28:06.493872 sshd[2966]: Connection closed by authenticating user root 107.175.39.180 port 38888 [preauth]
Sep 13 00:28:06.496088 systemd[1]: sshd@26-195.201.238.219:22-107.175.39.180:38888.service: Deactivated successfully.
Sep 13 00:28:06.738406 systemd[1]: Started sshd@27-195.201.238.219:22-107.175.39.180:53496.service - OpenSSH per-connection server daemon (107.175.39.180:53496).
Sep 13 00:28:11.624166 sudo[1760]: pam_unix(sudo:session): session closed for user root
Sep 13 00:28:11.786044 sshd[1753]: pam_unix(sshd:session): session closed for user core
Sep 13 00:28:11.794093 systemd[1]: sshd@16-195.201.238.219:22-147.75.109.163:42422.service: Deactivated successfully.
Sep 13 00:28:11.794277 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:28:11.797790 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:28:11.799872 systemd[1]: session-7.scope: Consumed 7.280s CPU time, 150.3M memory peak, 0B memory swap peak.
Sep 13 00:28:11.801913 systemd-logind[1449]: Removed session 7.
Sep 13 00:28:12.530360 sshd[3010]: Connection closed by authenticating user root 107.175.39.180 port 53496 [preauth]
Sep 13 00:28:12.533496 systemd[1]: sshd@27-195.201.238.219:22-107.175.39.180:53496.service: Deactivated successfully.
Sep 13 00:28:12.718893 systemd[1]: Started sshd@28-195.201.238.219:22-107.175.39.180:53504.service - OpenSSH per-connection server daemon (107.175.39.180:53504).
Sep 13 00:28:16.379804 sshd[3059]: Connection closed by authenticating user root 107.175.39.180 port 53504 [preauth]
Sep 13 00:28:16.382668 systemd[1]: sshd@28-195.201.238.219:22-107.175.39.180:53504.service: Deactivated successfully.
Sep 13 00:28:16.557127 systemd[1]: Started sshd@29-195.201.238.219:22-107.175.39.180:38974.service - OpenSSH per-connection server daemon (107.175.39.180:38974).
Sep 13 00:28:18.956120 systemd[1]: Started sshd@30-195.201.238.219:22-104.248.235.219:6103.service - OpenSSH per-connection server daemon (104.248.235.219:6103).
Sep 13 00:28:19.757046 systemd[1]: Created slice kubepods-besteffort-podbe8f95ce_15df_4d7c_a7cc_ba2a44236a95.slice - libcontainer container kubepods-besteffort-podbe8f95ce_15df_4d7c_a7cc_ba2a44236a95.slice.
Sep 13 00:28:19.909712 kubelet[2655]: I0913 00:28:19.908499 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be8f95ce-15df-4d7c-a7cc-ba2a44236a95-tigera-ca-bundle\") pod \"calico-typha-6cd796d767-88q9m\" (UID: \"be8f95ce-15df-4d7c-a7cc-ba2a44236a95\") " pod="calico-system/calico-typha-6cd796d767-88q9m"
Sep 13 00:28:19.909712 kubelet[2655]: I0913 00:28:19.908567 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/be8f95ce-15df-4d7c-a7cc-ba2a44236a95-typha-certs\") pod \"calico-typha-6cd796d767-88q9m\" (UID: \"be8f95ce-15df-4d7c-a7cc-ba2a44236a95\") " pod="calico-system/calico-typha-6cd796d767-88q9m"
Sep 13 00:28:19.909712 kubelet[2655]: I0913 00:28:19.908598 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxgth\" (UniqueName: \"kubernetes.io/projected/be8f95ce-15df-4d7c-a7cc-ba2a44236a95-kube-api-access-sxgth\") pod \"calico-typha-6cd796d767-88q9m\" (UID: \"be8f95ce-15df-4d7c-a7cc-ba2a44236a95\") " pod="calico-system/calico-typha-6cd796d767-88q9m"
Sep 13 00:28:20.050375 systemd[1]: Created slice kubepods-besteffort-pod8592accd_d560_498d_82d4_afce24311aa7.slice - libcontainer container kubepods-besteffort-pod8592accd_d560_498d_82d4_afce24311aa7.slice.
Sep 13 00:28:20.211208 kubelet[2655]: I0913 00:28:20.210795 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8592accd-d560-498d-82d4-afce24311aa7-xtables-lock\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211208 kubelet[2655]: I0913 00:28:20.210850 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8592accd-d560-498d-82d4-afce24311aa7-cni-bin-dir\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211208 kubelet[2655]: I0913 00:28:20.210870 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8592accd-d560-498d-82d4-afce24311aa7-cni-net-dir\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211208 kubelet[2655]: I0913 00:28:20.210885 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8592accd-d560-498d-82d4-afce24311aa7-var-run-calico\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211208 kubelet[2655]: I0913 00:28:20.210904 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8592accd-d560-498d-82d4-afce24311aa7-var-lib-calico\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211596 kubelet[2655]: I0913 00:28:20.211025 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8592accd-d560-498d-82d4-afce24311aa7-node-certs\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211596 kubelet[2655]: I0913 00:28:20.211070 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjt28\" (UniqueName: \"kubernetes.io/projected/8592accd-d560-498d-82d4-afce24311aa7-kube-api-access-vjt28\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211596 kubelet[2655]: I0913 00:28:20.211091 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8592accd-d560-498d-82d4-afce24311aa7-policysync\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211596 kubelet[2655]: I0913 00:28:20.211107 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8592accd-d560-498d-82d4-afce24311aa7-cni-log-dir\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211596 kubelet[2655]: I0913 00:28:20.211124 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8592accd-d560-498d-82d4-afce24311aa7-tigera-ca-bundle\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211774 kubelet[2655]: I0913 00:28:20.211142 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8592accd-d560-498d-82d4-afce24311aa7-flexvol-driver-host\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.211774 kubelet[2655]: I0913 00:28:20.211158 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8592accd-d560-498d-82d4-afce24311aa7-lib-modules\") pod \"calico-node-z7s2x\" (UID: \"8592accd-d560-498d-82d4-afce24311aa7\") " pod="calico-system/calico-node-z7s2x"
Sep 13 00:28:20.243270 kubelet[2655]: E0913 00:28:20.242652 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvd2b" podUID="b312368b-a0fb-41a9-8fcf-b787f4bcfe2e"
Sep 13 00:28:20.318697 kubelet[2655]: E0913 00:28:20.316422 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.318697 kubelet[2655]: W0913 00:28:20.316473 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.318697 kubelet[2655]: E0913 00:28:20.316501 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.320775 kubelet[2655]: E0913 00:28:20.320048 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.321076 kubelet[2655]: W0913 00:28:20.321048 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.321824 kubelet[2655]: E0913 00:28:20.321793 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.324846 kubelet[2655]: E0913 00:28:20.324811 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.325026 kubelet[2655]: W0913 00:28:20.325009 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.325101 kubelet[2655]: E0913 00:28:20.325084 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.325451 kubelet[2655]: E0913 00:28:20.325433 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.325543 kubelet[2655]: W0913 00:28:20.325530 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.326898 kubelet[2655]: E0913 00:28:20.326864 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.328215 kubelet[2655]: E0913 00:28:20.327373 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.329137 kubelet[2655]: W0913 00:28:20.329099 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.329401 kubelet[2655]: E0913 00:28:20.329343 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.329824 kubelet[2655]: E0913 00:28:20.329766 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.329824 kubelet[2655]: W0913 00:28:20.329817 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.329920 kubelet[2655]: E0913 00:28:20.329836 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.330100 kubelet[2655]: E0913 00:28:20.330072 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.330100 kubelet[2655]: W0913 00:28:20.330088 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.330100 kubelet[2655]: E0913 00:28:20.330098 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.330313 kubelet[2655]: E0913 00:28:20.330277 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.330358 kubelet[2655]: W0913 00:28:20.330290 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.330358 kubelet[2655]: E0913 00:28:20.330339 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.330601 kubelet[2655]: E0913 00:28:20.330576 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.330601 kubelet[2655]: W0913 00:28:20.330596 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.331708 kubelet[2655]: E0913 00:28:20.330608 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.331708 kubelet[2655]: E0913 00:28:20.331394 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.331708 kubelet[2655]: W0913 00:28:20.331421 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.331708 kubelet[2655]: E0913 00:28:20.331438 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.332975 kubelet[2655]: E0913 00:28:20.332916 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.332975 kubelet[2655]: W0913 00:28:20.332958 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.333091 kubelet[2655]: E0913 00:28:20.332987 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.333317 kubelet[2655]: E0913 00:28:20.333284 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.333317 kubelet[2655]: W0913 00:28:20.333313 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.333390 kubelet[2655]: E0913 00:28:20.333325 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.333556 kubelet[2655]: E0913 00:28:20.333531 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.333556 kubelet[2655]: W0913 00:28:20.333545 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.333556 kubelet[2655]: E0913 00:28:20.333555 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.333896 kubelet[2655]: E0913 00:28:20.333865 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.334741 kubelet[2655]: W0913 00:28:20.334707 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.334741 kubelet[2655]: E0913 00:28:20.334737 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.335076 kubelet[2655]: E0913 00:28:20.335055 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.335076 kubelet[2655]: W0913 00:28:20.335071 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.335140 kubelet[2655]: E0913 00:28:20.335083 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.335322 kubelet[2655]: E0913 00:28:20.335286 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.335362 kubelet[2655]: W0913 00:28:20.335317 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.335362 kubelet[2655]: E0913 00:28:20.335339 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.335591 kubelet[2655]: E0913 00:28:20.335572 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.335591 kubelet[2655]: W0913 00:28:20.335586 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.335656 kubelet[2655]: E0913 00:28:20.335595 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.336029 kubelet[2655]: E0913 00:28:20.336004 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.336029 kubelet[2655]: W0913 00:28:20.336020 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.336029 kubelet[2655]: E0913 00:28:20.336034 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.337033 kubelet[2655]: E0913 00:28:20.337005 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.337033 kubelet[2655]: W0913 00:28:20.337024 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.337033 kubelet[2655]: E0913 00:28:20.337038 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.337257 kubelet[2655]: E0913 00:28:20.337236 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.337257 kubelet[2655]: W0913 00:28:20.337249 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.337257 kubelet[2655]: E0913 00:28:20.337258 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.338721 kubelet[2655]: E0913 00:28:20.338157 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.338721 kubelet[2655]: W0913 00:28:20.338176 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.338721 kubelet[2655]: E0913 00:28:20.338188 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.338721 kubelet[2655]: E0913 00:28:20.338510 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.338721 kubelet[2655]: W0913 00:28:20.338520 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.338721 kubelet[2655]: E0913 00:28:20.338534 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.353644 kubelet[2655]: E0913 00:28:20.353119 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.355940 kubelet[2655]: W0913 00:28:20.353148 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.355940 kubelet[2655]: E0913 00:28:20.353855 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.359506 containerd[1470]: time="2025-09-13T00:28:20.358014129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7s2x,Uid:8592accd-d560-498d-82d4-afce24311aa7,Namespace:calico-system,Attempt:0,}"
Sep 13 00:28:20.365366 containerd[1470]: time="2025-09-13T00:28:20.364598266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cd796d767-88q9m,Uid:be8f95ce-15df-4d7c-a7cc-ba2a44236a95,Namespace:calico-system,Attempt:0,}"
Sep 13 00:28:20.414908 kubelet[2655]: E0913 00:28:20.414861 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.414908 kubelet[2655]: W0913 00:28:20.414891 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.415114 kubelet[2655]: E0913 00:28:20.414925 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.415114 kubelet[2655]: I0913 00:28:20.414957 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zrs5\" (UniqueName: \"kubernetes.io/projected/b312368b-a0fb-41a9-8fcf-b787f4bcfe2e-kube-api-access-5zrs5\") pod \"csi-node-driver-mvd2b\" (UID: \"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e\") " pod="calico-system/csi-node-driver-mvd2b"
Sep 13 00:28:20.417099 kubelet[2655]: E0913 00:28:20.417039 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.417244 kubelet[2655]: W0913 00:28:20.417179 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.418727 kubelet[2655]: E0913 00:28:20.418352 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.419065 kubelet[2655]: E0913 00:28:20.418873 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.419065 kubelet[2655]: W0913 00:28:20.418902 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.419065 kubelet[2655]: E0913 00:28:20.418921 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.420137 kubelet[2655]: I0913 00:28:20.420087 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b312368b-a0fb-41a9-8fcf-b787f4bcfe2e-varrun\") pod \"csi-node-driver-mvd2b\" (UID: \"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e\") " pod="calico-system/csi-node-driver-mvd2b"
Sep 13 00:28:20.420489 containerd[1470]: time="2025-09-13T00:28:20.418661723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:28:20.420626 containerd[1470]: time="2025-09-13T00:28:20.420470600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:28:20.421070 containerd[1470]: time="2025-09-13T00:28:20.420612198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:28:20.421147 kubelet[2655]: E0913 00:28:20.420593 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.421147 kubelet[2655]: W0913 00:28:20.421000 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.421292 kubelet[2655]: E0913 00:28:20.421035 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.423144 kubelet[2655]: E0913 00:28:20.422996 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.423144 kubelet[2655]: W0913 00:28:20.423130 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.423349 kubelet[2655]: E0913 00:28:20.423166 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.423933 containerd[1470]: time="2025-09-13T00:28:20.423082369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:28:20.425054 kubelet[2655]: E0913 00:28:20.424941 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.425054 kubelet[2655]: W0913 00:28:20.424967 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.425054 kubelet[2655]: E0913 00:28:20.425021 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.425687 kubelet[2655]: E0913 00:28:20.425502 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.425687 kubelet[2655]: W0913 00:28:20.425521 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.425687 kubelet[2655]: E0913 00:28:20.425537 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:28:20.425820 kubelet[2655]: I0913 00:28:20.425721 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b312368b-a0fb-41a9-8fcf-b787f4bcfe2e-socket-dir\") pod \"csi-node-driver-mvd2b\" (UID: \"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e\") " pod="calico-system/csi-node-driver-mvd2b"
Sep 13 00:28:20.426064 kubelet[2655]: E0913 00:28:20.425901 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:28:20.426064 kubelet[2655]: W0913 00:28:20.425920 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:28:20.426064 kubelet[2655]: E0913 00:28:20.425948 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 00:28:20.426396 kubelet[2655]: E0913 00:28:20.426144 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.426396 kubelet[2655]: W0913 00:28:20.426153 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.426396 kubelet[2655]: E0913 00:28:20.426177 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.426396 kubelet[2655]: E0913 00:28:20.426446 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.426396 kubelet[2655]: W0913 00:28:20.426456 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.426396 kubelet[2655]: E0913 00:28:20.426466 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.426396 kubelet[2655]: I0913 00:28:20.426495 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b312368b-a0fb-41a9-8fcf-b787f4bcfe2e-kubelet-dir\") pod \"csi-node-driver-mvd2b\" (UID: \"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e\") " pod="calico-system/csi-node-driver-mvd2b" Sep 13 00:28:20.426796 kubelet[2655]: E0913 00:28:20.426712 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.426796 kubelet[2655]: W0913 00:28:20.426725 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.426796 kubelet[2655]: E0913 00:28:20.426744 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.426796 kubelet[2655]: I0913 00:28:20.426762 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b312368b-a0fb-41a9-8fcf-b787f4bcfe2e-registration-dir\") pod \"csi-node-driver-mvd2b\" (UID: \"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e\") " pod="calico-system/csi-node-driver-mvd2b" Sep 13 00:28:20.427301 kubelet[2655]: E0913 00:28:20.426968 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.427301 kubelet[2655]: W0913 00:28:20.426988 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.427301 kubelet[2655]: E0913 00:28:20.427000 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:28:20.427301 kubelet[2655]: E0913 00:28:20.427245 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.427301 kubelet[2655]: W0913 00:28:20.427256 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.427301 kubelet[2655]: E0913 00:28:20.427271 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.429771 kubelet[2655]: E0913 00:28:20.429023 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.429771 kubelet[2655]: W0913 00:28:20.429051 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.429771 kubelet[2655]: E0913 00:28:20.429073 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.429771 kubelet[2655]: E0913 00:28:20.429271 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.432454 kubelet[2655]: W0913 00:28:20.429280 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.432454 kubelet[2655]: E0913 00:28:20.431766 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.433765 containerd[1470]: time="2025-09-13T00:28:20.433547169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:20.433765 containerd[1470]: time="2025-09-13T00:28:20.433617427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:20.433765 containerd[1470]: time="2025-09-13T00:28:20.433647155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:20.434061 containerd[1470]: time="2025-09-13T00:28:20.433764226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:20.466022 systemd[1]: Started cri-containerd-5b6a0ef70af8cd3501eb0813c1d5048266ab64f4b91b264ff1ecaa1240498850.scope - libcontainer container 5b6a0ef70af8cd3501eb0813c1d5048266ab64f4b91b264ff1ecaa1240498850. Sep 13 00:28:20.485667 systemd[1]: Started cri-containerd-18f4aafc495d4585f006bf8eb2fa559207e938b29ccc8b8b2120a2e3b1552b17.scope - libcontainer container 18f4aafc495d4585f006bf8eb2fa559207e938b29ccc8b8b2120a2e3b1552b17. 
Sep 13 00:28:20.527816 kubelet[2655]: E0913 00:28:20.527770 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.527816 kubelet[2655]: W0913 00:28:20.527798 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.528003 kubelet[2655]: E0913 00:28:20.527830 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.528280 kubelet[2655]: E0913 00:28:20.528250 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.528280 kubelet[2655]: W0913 00:28:20.528272 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.528420 kubelet[2655]: E0913 00:28:20.528399 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.529703 kubelet[2655]: E0913 00:28:20.529615 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.529703 kubelet[2655]: W0913 00:28:20.529640 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.529854 kubelet[2655]: E0913 00:28:20.529713 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.531493 kubelet[2655]: E0913 00:28:20.531394 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.531493 kubelet[2655]: W0913 00:28:20.531481 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.531654 kubelet[2655]: E0913 00:28:20.531519 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.532150 kubelet[2655]: E0913 00:28:20.531903 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.532150 kubelet[2655]: W0913 00:28:20.531924 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.532150 kubelet[2655]: E0913 00:28:20.532028 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:28:20.533607 kubelet[2655]: E0913 00:28:20.533434 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.533607 kubelet[2655]: W0913 00:28:20.533462 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.533607 kubelet[2655]: E0913 00:28:20.533496 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.533916 kubelet[2655]: E0913 00:28:20.533814 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.533916 kubelet[2655]: W0913 00:28:20.533827 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.533966 kubelet[2655]: E0913 00:28:20.533916 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.534249 kubelet[2655]: E0913 00:28:20.534032 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.534249 kubelet[2655]: W0913 00:28:20.534046 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.534249 kubelet[2655]: E0913 00:28:20.534126 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.534381 kubelet[2655]: E0913 00:28:20.534266 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.534381 kubelet[2655]: W0913 00:28:20.534276 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.534587 kubelet[2655]: E0913 00:28:20.534497 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.535150 kubelet[2655]: E0913 00:28:20.534970 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.535150 kubelet[2655]: W0913 00:28:20.534997 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.535150 kubelet[2655]: E0913 00:28:20.535139 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:28:20.535789 kubelet[2655]: E0913 00:28:20.535702 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.535789 kubelet[2655]: W0913 00:28:20.535724 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.535959 kubelet[2655]: E0913 00:28:20.535881 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.536992 kubelet[2655]: E0913 00:28:20.536940 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.536992 kubelet[2655]: W0913 00:28:20.536963 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.537176 kubelet[2655]: E0913 00:28:20.537150 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.538655 kubelet[2655]: E0913 00:28:20.538602 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.538655 kubelet[2655]: W0913 00:28:20.538650 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.539093 kubelet[2655]: E0913 00:28:20.538918 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.539093 kubelet[2655]: W0913 00:28:20.538934 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.539093 kubelet[2655]: E0913 00:28:20.539088 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.539203 kubelet[2655]: E0913 00:28:20.539162 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.539228 kubelet[2655]: E0913 00:28:20.539218 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.539228 kubelet[2655]: W0913 00:28:20.539225 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.539544 kubelet[2655]: E0913 00:28:20.539313 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:28:20.539605 kubelet[2655]: E0913 00:28:20.539579 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.539605 kubelet[2655]: W0913 00:28:20.539589 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.539886 kubelet[2655]: E0913 00:28:20.539854 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.540257 kubelet[2655]: E0913 00:28:20.540222 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.540257 kubelet[2655]: W0913 00:28:20.540242 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.540257 kubelet[2655]: E0913 00:28:20.540259 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.540949 kubelet[2655]: E0913 00:28:20.540590 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.540949 kubelet[2655]: W0913 00:28:20.540607 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.543921 kubelet[2655]: E0913 00:28:20.543865 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.544322 kubelet[2655]: E0913 00:28:20.544158 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.544322 kubelet[2655]: W0913 00:28:20.544179 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.544322 kubelet[2655]: E0913 00:28:20.544285 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.544489 kubelet[2655]: E0913 00:28:20.544462 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.544489 kubelet[2655]: W0913 00:28:20.544472 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.544686 kubelet[2655]: E0913 00:28:20.544553 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:28:20.544906 kubelet[2655]: E0913 00:28:20.544884 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.544906 kubelet[2655]: W0913 00:28:20.544901 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.544988 kubelet[2655]: E0913 00:28:20.544918 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.545289 kubelet[2655]: E0913 00:28:20.545263 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.545289 kubelet[2655]: W0913 00:28:20.545279 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.545289 kubelet[2655]: E0913 00:28:20.545293 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.545836 kubelet[2655]: E0913 00:28:20.545547 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.545836 kubelet[2655]: W0913 00:28:20.545563 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.545836 kubelet[2655]: E0913 00:28:20.545580 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.546138 kubelet[2655]: E0913 00:28:20.546078 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.546138 kubelet[2655]: W0913 00:28:20.546099 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.546138 kubelet[2655]: E0913 00:28:20.546122 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.546890 sshd[3071]: kex_protocol_error: type 20 seq 2 [preauth] Sep 13 00:28:20.546890 sshd[3071]: kex_protocol_error: type 30 seq 3 [preauth] Sep 13 00:28:20.547249 kubelet[2655]: E0913 00:28:20.547091 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.547249 kubelet[2655]: W0913 00:28:20.547106 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.547249 kubelet[2655]: E0913 00:28:20.547120 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:28:20.587420 containerd[1470]: time="2025-09-13T00:28:20.587186086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7s2x,Uid:8592accd-d560-498d-82d4-afce24311aa7,Namespace:calico-system,Attempt:0,} returns sandbox id \"18f4aafc495d4585f006bf8eb2fa559207e938b29ccc8b8b2120a2e3b1552b17\"" Sep 13 00:28:20.591818 containerd[1470]: time="2025-09-13T00:28:20.591527591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:28:20.604652 kubelet[2655]: E0913 00:28:20.604613 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:28:20.604652 kubelet[2655]: W0913 00:28:20.604642 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:28:20.605460 kubelet[2655]: E0913 00:28:20.604668 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:28:20.654561 containerd[1470]: time="2025-09-13T00:28:20.654392770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cd796d767-88q9m,Uid:be8f95ce-15df-4d7c-a7cc-ba2a44236a95,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b6a0ef70af8cd3501eb0813c1d5048266ab64f4b91b264ff1ecaa1240498850\"" Sep 13 00:28:20.809131 sshd[3068]: Connection closed by authenticating user root 107.175.39.180 port 38974 [preauth] Sep 13 00:28:20.813631 systemd[1]: sshd@29-195.201.238.219:22-107.175.39.180:38974.service: Deactivated successfully. Sep 13 00:28:20.911924 sshd[3071]: kex_protocol_error: type 20 seq 4 [preauth] Sep 13 00:28:20.911924 sshd[3071]: kex_protocol_error: type 30 seq 5 [preauth] Sep 13 00:28:21.021885 systemd[1]: Started sshd@31-195.201.238.219:22-107.175.39.180:38986.service - OpenSSH per-connection server daemon (107.175.39.180:38986). Sep 13 00:28:22.099787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350597536.mount: Deactivated successfully. 
Sep 13 00:28:22.189502 containerd[1470]: time="2025-09-13T00:28:22.189359574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:22.191249 containerd[1470]: time="2025-09-13T00:28:22.191189505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 13 00:28:22.195190 containerd[1470]: time="2025-09-13T00:28:22.193785985Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:22.197578 containerd[1470]: time="2025-09-13T00:28:22.197498421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:22.198996 containerd[1470]: time="2025-09-13T00:28:22.198927093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.607342847s" Sep 13 00:28:22.198996 containerd[1470]: time="2025-09-13T00:28:22.198986748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 13 00:28:22.201138 containerd[1470]: time="2025-09-13T00:28:22.201096868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:28:22.203359 containerd[1470]: time="2025-09-13T00:28:22.203310214Z" level=info msg="CreateContainer within sandbox \"18f4aafc495d4585f006bf8eb2fa559207e938b29ccc8b8b2120a2e3b1552b17\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:28:22.228053 containerd[1470]: time="2025-09-13T00:28:22.227997942Z" level=info msg="CreateContainer within sandbox \"18f4aafc495d4585f006bf8eb2fa559207e938b29ccc8b8b2120a2e3b1552b17\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"211e28ede237cf454e44695d3229c0d1b87d0b59ebf3bea8be3d4c3c619a2d30\"" Sep 13 00:28:22.230457 containerd[1470]: time="2025-09-13T00:28:22.230378970Z" level=info msg="StartContainer for \"211e28ede237cf454e44695d3229c0d1b87d0b59ebf3bea8be3d4c3c619a2d30\"" Sep 13 00:28:22.290998 systemd[1]: Started cri-containerd-211e28ede237cf454e44695d3229c0d1b87d0b59ebf3bea8be3d4c3c619a2d30.scope - libcontainer container 211e28ede237cf454e44695d3229c0d1b87d0b59ebf3bea8be3d4c3c619a2d30. 
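[Editor's note] CreateContainer/StartContainer for flexvol-driver, together with the cri-containerd-211e28….scope unit above, map onto containerd's container-plus-task model: the container is metadata plus a runtime spec, the task is the running runc shim. A sketch with the plain containerd client (the ids here are illustrative; the real flow goes through CRI):

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	image, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3")
	if err != nil {
		log.Fatal(err)
	}

	// Roughly what CreateContainer does: container record plus runtime spec.
	container, err := client.NewContainer(ctx, "flexvol-driver-demo",
		containerd.WithNewSnapshot("flexvol-driver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create the task (the runc shim) and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}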
Sep 13 00:28:22.327501 containerd[1470]: time="2025-09-13T00:28:22.326429137Z" level=info msg="StartContainer for \"211e28ede237cf454e44695d3229c0d1b87d0b59ebf3bea8be3d4c3c619a2d30\" returns successfully" Sep 13 00:28:22.347044 kubelet[2655]: E0913 00:28:22.346971 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvd2b" podUID="b312368b-a0fb-41a9-8fcf-b787f4bcfe2e" Sep 13 00:28:22.352817 systemd[1]: cri-containerd-211e28ede237cf454e44695d3229c0d1b87d0b59ebf3bea8be3d4c3c619a2d30.scope: Deactivated successfully. Sep 13 00:28:22.447302 containerd[1470]: time="2025-09-13T00:28:22.447221886Z" level=info msg="shim disconnected" id=211e28ede237cf454e44695d3229c0d1b87d0b59ebf3bea8be3d4c3c619a2d30 namespace=k8s.io Sep 13 00:28:22.447919 containerd[1470]: time="2025-09-13T00:28:22.447614423Z" level=warning msg="cleaning up after shim disconnected" id=211e28ede237cf454e44695d3229c0d1b87d0b59ebf3bea8be3d4c3c619a2d30 namespace=k8s.io Sep 13 00:28:22.447919 containerd[1470]: time="2025-09-13T00:28:22.447639789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:28:22.900015 sshd[3071]: kex_protocol_error: type 20 seq 6 [preauth] Sep 13 00:28:22.900015 sshd[3071]: kex_protocol_error: type 30 seq 7 [preauth] Sep 13 00:28:23.036609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-211e28ede237cf454e44695d3229c0d1b87d0b59ebf3bea8be3d4c3c619a2d30-rootfs.mount: Deactivated successfully. Sep 13 00:28:24.244347 containerd[1470]: time="2025-09-13T00:28:24.244183182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:24.245703 containerd[1470]: time="2025-09-13T00:28:24.245624396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=31736396" Sep 13 00:28:24.247065 containerd[1470]: time="2025-09-13T00:28:24.246993953Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:24.250342 containerd[1470]: time="2025-09-13T00:28:24.250287476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:24.251601 containerd[1470]: time="2025-09-13T00:28:24.251378769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.049807063s" Sep 13 00:28:24.251601 containerd[1470]: time="2025-09-13T00:28:24.251418178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 13 00:28:24.254052 containerd[1470]: time="2025-09-13T00:28:24.253815813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:28:24.284104 containerd[1470]: time="2025-09-13T00:28:24.284053255Z" level=info msg="CreateContainer within sandbox 
\"5b6a0ef70af8cd3501eb0813c1d5048266ab64f4b91b264ff1ecaa1240498850\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:28:24.299951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489768704.mount: Deactivated successfully. Sep 13 00:28:24.304725 containerd[1470]: time="2025-09-13T00:28:24.304429614Z" level=info msg="CreateContainer within sandbox \"5b6a0ef70af8cd3501eb0813c1d5048266ab64f4b91b264ff1ecaa1240498850\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9187cf209b173d6b00e14cb3f26dadd36348b0ffec62134e8d13fdb21155913f\"" Sep 13 00:28:24.307746 containerd[1470]: time="2025-09-13T00:28:24.306747231Z" level=info msg="StartContainer for \"9187cf209b173d6b00e14cb3f26dadd36348b0ffec62134e8d13fdb21155913f\"" Sep 13 00:28:24.348989 kubelet[2655]: E0913 00:28:24.346967 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvd2b" podUID="b312368b-a0fb-41a9-8fcf-b787f4bcfe2e" Sep 13 00:28:24.347228 systemd[1]: Started cri-containerd-9187cf209b173d6b00e14cb3f26dadd36348b0ffec62134e8d13fdb21155913f.scope - libcontainer container 9187cf209b173d6b00e14cb3f26dadd36348b0ffec62134e8d13fdb21155913f. Sep 13 00:28:24.401085 containerd[1470]: time="2025-09-13T00:28:24.400925481Z" level=info msg="StartContainer for \"9187cf209b173d6b00e14cb3f26dadd36348b0ffec62134e8d13fdb21155913f\" returns successfully" Sep 13 00:28:24.518958 kubelet[2655]: I0913 00:28:24.518209 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cd796d767-88q9m" podStartSLOduration=1.924203038 podStartE2EDuration="5.518190557s" podCreationTimestamp="2025-09-13 00:28:19 +0000 UTC" firstStartedPulling="2025-09-13 00:28:20.658920564 +0000 UTC m=+25.441951832" lastFinishedPulling="2025-09-13 00:28:24.252908043 +0000 UTC m=+29.035939351" observedRunningTime="2025-09-13 00:28:24.518077291 +0000 UTC m=+29.301108639" watchObservedRunningTime="2025-09-13 00:28:24.518190557 +0000 UTC m=+29.301221825" Sep 13 00:28:25.272496 sshd[3237]: Connection closed by authenticating user root 107.175.39.180 port 38986 [preauth] Sep 13 00:28:25.275752 systemd[1]: sshd@31-195.201.238.219:22-107.175.39.180:38986.service: Deactivated successfully. Sep 13 00:28:25.460306 systemd[1]: Started sshd@32-195.201.238.219:22-107.175.39.180:41956.service - OpenSSH per-connection server daemon (107.175.39.180:41956). 
Sep 13 00:28:26.346578 kubelet[2655]: E0913 00:28:26.346530 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvd2b" podUID="b312368b-a0fb-41a9-8fcf-b787f4bcfe2e" Sep 13 00:28:26.471165 containerd[1470]: time="2025-09-13T00:28:26.471101229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:26.472434 containerd[1470]: time="2025-09-13T00:28:26.472391351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 13 00:28:26.474302 containerd[1470]: time="2025-09-13T00:28:26.473853430Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:26.477000 containerd[1470]: time="2025-09-13T00:28:26.476944345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:26.478057 containerd[1470]: time="2025-09-13T00:28:26.478016299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.224158037s" Sep 13 00:28:26.478057 containerd[1470]: time="2025-09-13T00:28:26.478056108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 13 00:28:26.483170 containerd[1470]: time="2025-09-13T00:28:26.483119854Z" level=info msg="CreateContainer within sandbox \"18f4aafc495d4585f006bf8eb2fa559207e938b29ccc8b8b2120a2e3b1552b17\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:28:26.506659 containerd[1470]: time="2025-09-13T00:28:26.506562213Z" level=info msg="CreateContainer within sandbox \"18f4aafc495d4585f006bf8eb2fa559207e938b29ccc8b8b2120a2e3b1552b17\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0fc06e7f610bdcfb73e8f3f5a3f4f76f2b398c0576d142b7e08eefea6e76d8da\"" Sep 13 00:28:26.512695 containerd[1470]: time="2025-09-13T00:28:26.509729104Z" level=info msg="StartContainer for \"0fc06e7f610bdcfb73e8f3f5a3f4f76f2b398c0576d142b7e08eefea6e76d8da\"" Sep 13 00:28:26.555963 systemd[1]: Started cri-containerd-0fc06e7f610bdcfb73e8f3f5a3f4f76f2b398c0576d142b7e08eefea6e76d8da.scope - libcontainer container 0fc06e7f610bdcfb73e8f3f5a3f4f76f2b398c0576d142b7e08eefea6e76d8da. 
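[Editor's note] install-cni is Calico's bootstrap container: it copies the CNI plugin binaries into /opt/cni/bin and writes configuration (including the calico-kubeconfig seen in the next record) under /etc/cni/net.d. A deliberately simplified sketch of that shape; the source path, file names, and the one-line conflist are illustrative only, not Calico's actual installer logic:

package main

import (
	"log"
	"os"
	"path/filepath"
)

// Simplified idea of an install-cni step: copy one plugin binary into the
// host bin dir and drop a network config into the host conf dir. The real
// installer templates a full conflist and a kubeconfig.
func installCNI(binSrc, binDir, netDir string, conflist []byte) error {
	bin, err := os.ReadFile(binSrc)
	if err != nil {
		return err
	}
	if err := os.WriteFile(filepath.Join(binDir, "calico"), bin, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(netDir, "10-calico.conflist"), conflist, 0o644)
}

func main() {
	if err := installCNI("/calico", "/opt/cni/bin", "/etc/cni/net.d",
		[]byte(`{"name":"k8s-pod-network","cniVersion":"0.3.1","plugins":[{"type":"calico"}]}`)); err != nil {
		log.Fatal(err)
	}
}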
Sep 13 00:28:26.590390 containerd[1470]: time="2025-09-13T00:28:26.590275133Z" level=info msg="StartContainer for \"0fc06e7f610bdcfb73e8f3f5a3f4f76f2b398c0576d142b7e08eefea6e76d8da\" returns successfully" Sep 13 00:28:27.088329 containerd[1470]: time="2025-09-13T00:28:27.088268673Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:28:27.096095 kubelet[2655]: I0913 00:28:27.094667 2655 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:28:27.096493 systemd[1]: cri-containerd-0fc06e7f610bdcfb73e8f3f5a3f4f76f2b398c0576d142b7e08eefea6e76d8da.scope: Deactivated successfully. Sep 13 00:28:27.136995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fc06e7f610bdcfb73e8f3f5a3f4f76f2b398c0576d142b7e08eefea6e76d8da-rootfs.mount: Deactivated successfully. Sep 13 00:28:27.160581 systemd[1]: Created slice kubepods-burstable-poda568f78d_444c_478c_ac80_6c89593927f2.slice - libcontainer container kubepods-burstable-poda568f78d_444c_478c_ac80_6c89593927f2.slice. Sep 13 00:28:27.176395 systemd[1]: Created slice kubepods-besteffort-pod3df007d3_21cb_4a21_97dd_dba0e3686044.slice - libcontainer container kubepods-besteffort-pod3df007d3_21cb_4a21_97dd_dba0e3686044.slice. Sep 13 00:28:27.202793 systemd[1]: Created slice kubepods-burstable-podd9a34000_b87c_4a54_a63c_fdd33a922040.slice - libcontainer container kubepods-burstable-podd9a34000_b87c_4a54_a63c_fdd33a922040.slice. Sep 13 00:28:27.226624 systemd[1]: Created slice kubepods-besteffort-pod10ee540c_0324_4579_8d1e_34d8475f5cac.slice - libcontainer container kubepods-besteffort-pod10ee540c_0324_4579_8d1e_34d8475f5cac.slice. Sep 13 00:28:27.243738 systemd[1]: Created slice kubepods-besteffort-podc520c01d_eab5_4613_a97d_e129d2838442.slice - libcontainer container kubepods-besteffort-podc520c01d_eab5_4613_a97d_e129d2838442.slice. Sep 13 00:28:27.257075 systemd[1]: Created slice kubepods-besteffort-podde7829ee_942c_4eb2_8399_15d4e08c4967.slice - libcontainer container kubepods-besteffort-podde7829ee_942c_4eb2_8399_15d4e08c4967.slice. Sep 13 00:28:27.268006 systemd[1]: Created slice kubepods-besteffort-podef73e584_e419_4d03_b7f7_96ac3d16f498.slice - libcontainer container kubepods-besteffort-podef73e584_e419_4d03_b7f7_96ac3d16f498.slice. 
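[Editor's note] The "no network config found in /etc/cni/net.d: cni plugin not initialized" reload error is containerd's CNI layer finding no conflist yet when the fs event fires (only the calico-kubeconfig had been written at that point). Config discovery looks roughly like this with the upstream libcni package, simplified here to .conflist files:

package main

import (
	"fmt"
	"log"
	"sort"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Enumerate candidate config files the way CNI consumers do.
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conflist"})
	if err != nil {
		log.Fatal(err)
	}
	if len(files) == 0 {
		// This is the state the containerd error above reports.
		log.Fatal("no network config found in /etc/cni/net.d")
	}
	sort.Strings(files)
	conf, err := libcni.ConfListFromFile(files[0])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("loaded CNI network:", conf.Name)
}

The calico plugin additionally reads /var/lib/calico/nodename, which is written by a running calico-node; that is why the RunPodSandbox attempts below keep failing with "stat /var/lib/calico/nodename: no such file or directory" until the node container is up.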
Sep 13 00:28:27.281643 containerd[1470]: time="2025-09-13T00:28:27.281395687Z" level=info msg="shim disconnected" id=0fc06e7f610bdcfb73e8f3f5a3f4f76f2b398c0576d142b7e08eefea6e76d8da namespace=k8s.io Sep 13 00:28:27.281643 containerd[1470]: time="2025-09-13T00:28:27.281460541Z" level=warning msg="cleaning up after shim disconnected" id=0fc06e7f610bdcfb73e8f3f5a3f4f76f2b398c0576d142b7e08eefea6e76d8da namespace=k8s.io Sep 13 00:28:27.281643 containerd[1470]: time="2025-09-13T00:28:27.281469463Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:28:27.297410 kubelet[2655]: I0913 00:28:27.296597 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ktx\" (UniqueName: \"kubernetes.io/projected/de7829ee-942c-4eb2-8399-15d4e08c4967-kube-api-access-h7ktx\") pod \"whisker-65757d5db6-hv4sh\" (UID: \"de7829ee-942c-4eb2-8399-15d4e08c4967\") " pod="calico-system/whisker-65757d5db6-hv4sh" Sep 13 00:28:27.297410 kubelet[2655]: I0913 00:28:27.296802 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10ee540c-0324-4579-8d1e-34d8475f5cac-goldmane-ca-bundle\") pod \"goldmane-7988f88666-65946\" (UID: \"10ee540c-0324-4579-8d1e-34d8475f5cac\") " pod="calico-system/goldmane-7988f88666-65946" Sep 13 00:28:27.297410 kubelet[2655]: I0913 00:28:27.296828 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrn6l\" (UniqueName: \"kubernetes.io/projected/c520c01d-eab5-4613-a97d-e129d2838442-kube-api-access-wrn6l\") pod \"calico-apiserver-7bd7f85854-x7mgg\" (UID: \"c520c01d-eab5-4613-a97d-e129d2838442\") " pod="calico-apiserver/calico-apiserver-7bd7f85854-x7mgg" Sep 13 00:28:27.297410 kubelet[2655]: I0913 00:28:27.296867 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a34000-b87c-4a54-a63c-fdd33a922040-config-volume\") pod \"coredns-7c65d6cfc9-g9w6r\" (UID: \"d9a34000-b87c-4a54-a63c-fdd33a922040\") " pod="kube-system/coredns-7c65d6cfc9-g9w6r" Sep 13 00:28:27.297410 kubelet[2655]: I0913 00:28:27.296889 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10ee540c-0324-4579-8d1e-34d8475f5cac-config\") pod \"goldmane-7988f88666-65946\" (UID: \"10ee540c-0324-4579-8d1e-34d8475f5cac\") " pod="calico-system/goldmane-7988f88666-65946" Sep 13 00:28:27.297933 kubelet[2655]: I0913 00:28:27.296930 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de7829ee-942c-4eb2-8399-15d4e08c4967-whisker-backend-key-pair\") pod \"whisker-65757d5db6-hv4sh\" (UID: \"de7829ee-942c-4eb2-8399-15d4e08c4967\") " pod="calico-system/whisker-65757d5db6-hv4sh" Sep 13 00:28:27.297933 kubelet[2655]: I0913 00:28:27.296949 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwsgl\" (UniqueName: \"kubernetes.io/projected/a568f78d-444c-478c-ac80-6c89593927f2-kube-api-access-bwsgl\") pod \"coredns-7c65d6cfc9-2wk28\" (UID: \"a568f78d-444c-478c-ac80-6c89593927f2\") " pod="kube-system/coredns-7c65d6cfc9-2wk28" Sep 13 00:28:27.297933 kubelet[2655]: I0913 00:28:27.296975 2655 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7829ee-942c-4eb2-8399-15d4e08c4967-whisker-ca-bundle\") pod \"whisker-65757d5db6-hv4sh\" (UID: \"de7829ee-942c-4eb2-8399-15d4e08c4967\") " pod="calico-system/whisker-65757d5db6-hv4sh" Sep 13 00:28:27.297933 kubelet[2655]: I0913 00:28:27.296992 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hhc9\" (UniqueName: \"kubernetes.io/projected/d9a34000-b87c-4a54-a63c-fdd33a922040-kube-api-access-6hhc9\") pod \"coredns-7c65d6cfc9-g9w6r\" (UID: \"d9a34000-b87c-4a54-a63c-fdd33a922040\") " pod="kube-system/coredns-7c65d6cfc9-g9w6r" Sep 13 00:28:27.297933 kubelet[2655]: I0913 00:28:27.297009 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkcdv\" (UniqueName: \"kubernetes.io/projected/10ee540c-0324-4579-8d1e-34d8475f5cac-kube-api-access-mkcdv\") pod \"goldmane-7988f88666-65946\" (UID: \"10ee540c-0324-4579-8d1e-34d8475f5cac\") " pod="calico-system/goldmane-7988f88666-65946" Sep 13 00:28:27.298260 kubelet[2655]: I0913 00:28:27.297031 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c520c01d-eab5-4613-a97d-e129d2838442-calico-apiserver-certs\") pod \"calico-apiserver-7bd7f85854-x7mgg\" (UID: \"c520c01d-eab5-4613-a97d-e129d2838442\") " pod="calico-apiserver/calico-apiserver-7bd7f85854-x7mgg" Sep 13 00:28:27.298260 kubelet[2655]: I0913 00:28:27.297050 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a568f78d-444c-478c-ac80-6c89593927f2-config-volume\") pod \"coredns-7c65d6cfc9-2wk28\" (UID: \"a568f78d-444c-478c-ac80-6c89593927f2\") " pod="kube-system/coredns-7c65d6cfc9-2wk28" Sep 13 00:28:27.298260 kubelet[2655]: I0913 00:28:27.297064 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/10ee540c-0324-4579-8d1e-34d8475f5cac-goldmane-key-pair\") pod \"goldmane-7988f88666-65946\" (UID: \"10ee540c-0324-4579-8d1e-34d8475f5cac\") " pod="calico-system/goldmane-7988f88666-65946" Sep 13 00:28:27.298260 kubelet[2655]: I0913 00:28:27.297083 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df007d3-21cb-4a21-97dd-dba0e3686044-tigera-ca-bundle\") pod \"calico-kube-controllers-9498d6585-s89kj\" (UID: \"3df007d3-21cb-4a21-97dd-dba0e3686044\") " pod="calico-system/calico-kube-controllers-9498d6585-s89kj" Sep 13 00:28:27.298260 kubelet[2655]: I0913 00:28:27.297104 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbjsb\" (UniqueName: \"kubernetes.io/projected/3df007d3-21cb-4a21-97dd-dba0e3686044-kube-api-access-qbjsb\") pod \"calico-kube-controllers-9498d6585-s89kj\" (UID: \"3df007d3-21cb-4a21-97dd-dba0e3686044\") " pod="calico-system/calico-kube-controllers-9498d6585-s89kj" Sep 13 00:28:27.398731 kubelet[2655]: I0913 00:28:27.397798 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf2t2\" (UniqueName: \"kubernetes.io/projected/ef73e584-e419-4d03-b7f7-96ac3d16f498-kube-api-access-lf2t2\") pod 
\"calico-apiserver-7bd7f85854-msr2v\" (UID: \"ef73e584-e419-4d03-b7f7-96ac3d16f498\") " pod="calico-apiserver/calico-apiserver-7bd7f85854-msr2v" Sep 13 00:28:27.398731 kubelet[2655]: I0913 00:28:27.398032 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef73e584-e419-4d03-b7f7-96ac3d16f498-calico-apiserver-certs\") pod \"calico-apiserver-7bd7f85854-msr2v\" (UID: \"ef73e584-e419-4d03-b7f7-96ac3d16f498\") " pod="calico-apiserver/calico-apiserver-7bd7f85854-msr2v" Sep 13 00:28:27.471055 containerd[1470]: time="2025-09-13T00:28:27.471012317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2wk28,Uid:a568f78d-444c-478c-ac80-6c89593927f2,Namespace:kube-system,Attempt:0,}" Sep 13 00:28:27.489179 containerd[1470]: time="2025-09-13T00:28:27.489120162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9498d6585-s89kj,Uid:3df007d3-21cb-4a21-97dd-dba0e3686044,Namespace:calico-system,Attempt:0,}" Sep 13 00:28:27.532372 containerd[1470]: time="2025-09-13T00:28:27.531545772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g9w6r,Uid:d9a34000-b87c-4a54-a63c-fdd33a922040,Namespace:kube-system,Attempt:0,}" Sep 13 00:28:27.539777 containerd[1470]: time="2025-09-13T00:28:27.539630809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-65946,Uid:10ee540c-0324-4579-8d1e-34d8475f5cac,Namespace:calico-system,Attempt:0,}" Sep 13 00:28:27.557473 containerd[1470]: time="2025-09-13T00:28:27.557325167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:28:27.558642 containerd[1470]: time="2025-09-13T00:28:27.557868642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd7f85854-x7mgg,Uid:c520c01d-eab5-4613-a97d-e129d2838442,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:28:27.573542 containerd[1470]: time="2025-09-13T00:28:27.573486319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65757d5db6-hv4sh,Uid:de7829ee-942c-4eb2-8399-15d4e08c4967,Namespace:calico-system,Attempt:0,}" Sep 13 00:28:27.583814 containerd[1470]: time="2025-09-13T00:28:27.583745458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd7f85854-msr2v,Uid:ef73e584-e419-4d03-b7f7-96ac3d16f498,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:28:27.713860 containerd[1470]: time="2025-09-13T00:28:27.713402353Z" level=error msg="Failed to destroy network for sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.714281 containerd[1470]: time="2025-09-13T00:28:27.714061533Z" level=error msg="encountered an error cleaning up failed sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.714649 containerd[1470]: time="2025-09-13T00:28:27.714121466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2wk28,Uid:a568f78d-444c-478c-ac80-6c89593927f2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to 
setup network for sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.716006 kubelet[2655]: E0913 00:28:27.714982 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.716006 kubelet[2655]: E0913 00:28:27.715077 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2wk28" Sep 13 00:28:27.716006 kubelet[2655]: E0913 00:28:27.715098 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2wk28" Sep 13 00:28:27.716191 kubelet[2655]: E0913 00:28:27.715146 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2wk28_kube-system(a568f78d-444c-478c-ac80-6c89593927f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2wk28_kube-system(a568f78d-444c-478c-ac80-6c89593927f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2wk28" podUID="a568f78d-444c-478c-ac80-6c89593927f2" Sep 13 00:28:27.806106 containerd[1470]: time="2025-09-13T00:28:27.806042948Z" level=error msg="Failed to destroy network for sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.806519 containerd[1470]: time="2025-09-13T00:28:27.806455995Z" level=error msg="encountered an error cleaning up failed sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.806568 containerd[1470]: time="2025-09-13T00:28:27.806524210Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-9498d6585-s89kj,Uid:3df007d3-21cb-4a21-97dd-dba0e3686044,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.806919 kubelet[2655]: E0913 00:28:27.806776 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.806919 kubelet[2655]: E0913 00:28:27.806842 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9498d6585-s89kj" Sep 13 00:28:27.806919 kubelet[2655]: E0913 00:28:27.806862 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9498d6585-s89kj" Sep 13 00:28:27.807855 kubelet[2655]: E0913 00:28:27.807777 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9498d6585-s89kj_calico-system(3df007d3-21cb-4a21-97dd-dba0e3686044)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9498d6585-s89kj_calico-system(3df007d3-21cb-4a21-97dd-dba0e3686044)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9498d6585-s89kj" podUID="3df007d3-21cb-4a21-97dd-dba0e3686044" Sep 13 00:28:27.819614 containerd[1470]: time="2025-09-13T00:28:27.819401945Z" level=error msg="Failed to destroy network for sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.819822 containerd[1470]: time="2025-09-13T00:28:27.819786106Z" level=error msg="encountered an error cleaning up failed sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 13 00:28:27.819869 containerd[1470]: time="2025-09-13T00:28:27.819850040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65757d5db6-hv4sh,Uid:de7829ee-942c-4eb2-8399-15d4e08c4967,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.820208 kubelet[2655]: E0913 00:28:27.820150 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.820304 kubelet[2655]: E0913 00:28:27.820222 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65757d5db6-hv4sh" Sep 13 00:28:27.820304 kubelet[2655]: E0913 00:28:27.820243 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65757d5db6-hv4sh" Sep 13 00:28:27.820456 kubelet[2655]: E0913 00:28:27.820291 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65757d5db6-hv4sh_calico-system(de7829ee-942c-4eb2-8399-15d4e08c4967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65757d5db6-hv4sh_calico-system(de7829ee-942c-4eb2-8399-15d4e08c4967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65757d5db6-hv4sh" podUID="de7829ee-942c-4eb2-8399-15d4e08c4967" Sep 13 00:28:27.848643 containerd[1470]: time="2025-09-13T00:28:27.848580141Z" level=error msg="Failed to destroy network for sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.849391 containerd[1470]: time="2025-09-13T00:28:27.849271528Z" level=error msg="encountered an error cleaning up failed sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.849391 containerd[1470]: time="2025-09-13T00:28:27.849345224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g9w6r,Uid:d9a34000-b87c-4a54-a63c-fdd33a922040,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.850129 kubelet[2655]: E0913 00:28:27.849737 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.850129 kubelet[2655]: E0913 00:28:27.849805 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g9w6r" Sep 13 00:28:27.850129 kubelet[2655]: E0913 00:28:27.849825 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g9w6r" Sep 13 00:28:27.850294 kubelet[2655]: E0913 00:28:27.849871 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-g9w6r_kube-system(d9a34000-b87c-4a54-a63c-fdd33a922040)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-g9w6r_kube-system(d9a34000-b87c-4a54-a63c-fdd33a922040)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-g9w6r" podUID="d9a34000-b87c-4a54-a63c-fdd33a922040" Sep 13 00:28:27.857271 containerd[1470]: time="2025-09-13T00:28:27.857212174Z" level=error msg="Failed to destroy network for sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.858947 containerd[1470]: time="2025-09-13T00:28:27.858799832Z" level=error msg="encountered an error cleaning up failed sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.858947 containerd[1470]: time="2025-09-13T00:28:27.858879969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-65946,Uid:10ee540c-0324-4579-8d1e-34d8475f5cac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.860454 kubelet[2655]: E0913 00:28:27.859183 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.860454 kubelet[2655]: E0913 00:28:27.859241 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-65946" Sep 13 00:28:27.860454 kubelet[2655]: E0913 00:28:27.859264 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-65946" Sep 13 00:28:27.860569 kubelet[2655]: E0913 00:28:27.859315 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-65946_calico-system(10ee540c-0324-4579-8d1e-34d8475f5cac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-65946_calico-system(10ee540c-0324-4579-8d1e-34d8475f5cac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-65946" podUID="10ee540c-0324-4579-8d1e-34d8475f5cac" Sep 13 00:28:27.882430 containerd[1470]: time="2025-09-13T00:28:27.882067373Z" level=error msg="Failed to destroy network for sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.883057 containerd[1470]: time="2025-09-13T00:28:27.882740036Z" level=error msg="encountered an error cleaning up failed sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.883057 containerd[1470]: time="2025-09-13T00:28:27.882838417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd7f85854-x7mgg,Uid:c520c01d-eab5-4613-a97d-e129d2838442,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.883825 kubelet[2655]: E0913 00:28:27.883146 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.883825 kubelet[2655]: E0913 00:28:27.883207 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bd7f85854-x7mgg" Sep 13 00:28:27.883825 kubelet[2655]: E0913 00:28:27.883232 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bd7f85854-x7mgg" Sep 13 00:28:27.883924 kubelet[2655]: E0913 00:28:27.883290 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bd7f85854-x7mgg_calico-apiserver(c520c01d-eab5-4613-a97d-e129d2838442)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bd7f85854-x7mgg_calico-apiserver(c520c01d-eab5-4613-a97d-e129d2838442)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bd7f85854-x7mgg" podUID="c520c01d-eab5-4613-a97d-e129d2838442" Sep 13 00:28:27.890783 containerd[1470]: time="2025-09-13T00:28:27.890715930Z" level=error msg="Failed to destroy network for sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.891314 containerd[1470]: time="2025-09-13T00:28:27.891271768Z" level=error msg="encountered an 
error cleaning up failed sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.891407 containerd[1470]: time="2025-09-13T00:28:27.891333061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd7f85854-msr2v,Uid:ef73e584-e419-4d03-b7f7-96ac3d16f498,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.891999 kubelet[2655]: E0913 00:28:27.891782 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:27.891999 kubelet[2655]: E0913 00:28:27.891843 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bd7f85854-msr2v" Sep 13 00:28:27.891999 kubelet[2655]: E0913 00:28:27.891861 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bd7f85854-msr2v" Sep 13 00:28:27.892169 kubelet[2655]: E0913 00:28:27.891907 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bd7f85854-msr2v_calico-apiserver(ef73e584-e419-4d03-b7f7-96ac3d16f498)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bd7f85854-msr2v_calico-apiserver(ef73e584-e419-4d03-b7f7-96ac3d16f498)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bd7f85854-msr2v" podUID="ef73e584-e419-4d03-b7f7-96ac3d16f498" Sep 13 00:28:28.355282 systemd[1]: Created slice kubepods-besteffort-podb312368b_a0fb_41a9_8fcf_b787f4bcfe2e.slice - libcontainer container kubepods-besteffort-podb312368b_a0fb_41a9_8fcf_b787f4bcfe2e.slice. 
Sep 13 00:28:28.358839 containerd[1470]: time="2025-09-13T00:28:28.358796044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvd2b,Uid:b312368b-a0fb-41a9-8fcf-b787f4bcfe2e,Namespace:calico-system,Attempt:0,}" Sep 13 00:28:28.422379 containerd[1470]: time="2025-09-13T00:28:28.421293766Z" level=error msg="Failed to destroy network for sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.422379 containerd[1470]: time="2025-09-13T00:28:28.421603550Z" level=error msg="encountered an error cleaning up failed sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.422379 containerd[1470]: time="2025-09-13T00:28:28.421648399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvd2b,Uid:b312368b-a0fb-41a9-8fcf-b787f4bcfe2e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.422557 kubelet[2655]: E0913 00:28:28.422006 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.422557 kubelet[2655]: E0913 00:28:28.422061 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvd2b" Sep 13 00:28:28.422557 kubelet[2655]: E0913 00:28:28.422092 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvd2b" Sep 13 00:28:28.422975 kubelet[2655]: E0913 00:28:28.422153 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mvd2b_calico-system(b312368b-a0fb-41a9-8fcf-b787f4bcfe2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mvd2b_calico-system(b312368b-a0fb-41a9-8fcf-b787f4bcfe2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvd2b" podUID="b312368b-a0fb-41a9-8fcf-b787f4bcfe2e" Sep 13 00:28:28.501938 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649-shm.mount: Deactivated successfully. Sep 13 00:28:28.502079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f-shm.mount: Deactivated successfully. Sep 13 00:28:28.537581 kubelet[2655]: I0913 00:28:28.536818 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Sep 13 00:28:28.538255 containerd[1470]: time="2025-09-13T00:28:28.538206898Z" level=info msg="StopPodSandbox for \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\"" Sep 13 00:28:28.538594 containerd[1470]: time="2025-09-13T00:28:28.538401658Z" level=info msg="Ensure that sandbox 8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2 in task-service has been cleanup successfully" Sep 13 00:28:28.540878 kubelet[2655]: I0913 00:28:28.540347 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Sep 13 00:28:28.543344 containerd[1470]: time="2025-09-13T00:28:28.542491623Z" level=info msg="StopPodSandbox for \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\"" Sep 13 00:28:28.543504 kubelet[2655]: I0913 00:28:28.542814 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:28.543830 containerd[1470]: time="2025-09-13T00:28:28.543789692Z" level=info msg="StopPodSandbox for \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\"" Sep 13 00:28:28.544951 containerd[1470]: time="2025-09-13T00:28:28.544835588Z" level=info msg="Ensure that sandbox 15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f in task-service has been cleanup successfully" Sep 13 00:28:28.544951 containerd[1470]: time="2025-09-13T00:28:28.544911044Z" level=info msg="Ensure that sandbox e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c in task-service has been cleanup successfully" Sep 13 00:28:28.553139 kubelet[2655]: I0913 00:28:28.552938 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Sep 13 00:28:28.557152 containerd[1470]: time="2025-09-13T00:28:28.555036737Z" level=info msg="StopPodSandbox for \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\"" Sep 13 00:28:28.557152 containerd[1470]: time="2025-09-13T00:28:28.555539001Z" level=info msg="Ensure that sandbox 346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb in task-service has been cleanup successfully" Sep 13 00:28:28.561084 kubelet[2655]: I0913 00:28:28.560981 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:28.563882 containerd[1470]: time="2025-09-13T00:28:28.562519644Z" level=info msg="StopPodSandbox for 
\"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\"" Sep 13 00:28:28.563882 containerd[1470]: time="2025-09-13T00:28:28.563645037Z" level=info msg="Ensure that sandbox 20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a in task-service has been cleanup successfully" Sep 13 00:28:28.569492 kubelet[2655]: I0913 00:28:28.569449 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:28.571739 containerd[1470]: time="2025-09-13T00:28:28.570850807Z" level=info msg="StopPodSandbox for \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\"" Sep 13 00:28:28.571739 containerd[1470]: time="2025-09-13T00:28:28.571248769Z" level=info msg="Ensure that sandbox bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4 in task-service has been cleanup successfully" Sep 13 00:28:28.598401 kubelet[2655]: I0913 00:28:28.597513 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:28.600807 containerd[1470]: time="2025-09-13T00:28:28.600486854Z" level=info msg="StopPodSandbox for \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\"" Sep 13 00:28:28.602536 containerd[1470]: time="2025-09-13T00:28:28.602151038Z" level=info msg="Ensure that sandbox 41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d in task-service has been cleanup successfully" Sep 13 00:28:28.603456 containerd[1470]: time="2025-09-13T00:28:28.600924545Z" level=error msg="StopPodSandbox for \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\" failed" error="failed to destroy network for sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.606058 kubelet[2655]: E0913 00:28:28.605916 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Sep 13 00:28:28.606058 kubelet[2655]: E0913 00:28:28.605985 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2"} Sep 13 00:28:28.606757 kubelet[2655]: E0913 00:28:28.606043 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:28:28.606885 kubelet[2655]: E0913 00:28:28.606772 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvd2b" podUID="b312368b-a0fb-41a9-8fcf-b787f4bcfe2e" Sep 13 00:28:28.609132 kubelet[2655]: I0913 00:28:28.609068 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:28.610344 containerd[1470]: time="2025-09-13T00:28:28.610262955Z" level=info msg="StopPodSandbox for \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\"" Sep 13 00:28:28.611661 containerd[1470]: time="2025-09-13T00:28:28.611545300Z" level=info msg="Ensure that sandbox bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649 in task-service has been cleanup successfully" Sep 13 00:28:28.696702 containerd[1470]: time="2025-09-13T00:28:28.695511461Z" level=error msg="StopPodSandbox for \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\" failed" error="failed to destroy network for sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.697158 kubelet[2655]: E0913 00:28:28.696098 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:28.697158 kubelet[2655]: E0913 00:28:28.696179 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a"} Sep 13 00:28:28.697158 kubelet[2655]: E0913 00:28:28.696215 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c520c01d-eab5-4613-a97d-e129d2838442\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:28:28.697158 kubelet[2655]: E0913 00:28:28.696239 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c520c01d-eab5-4613-a97d-e129d2838442\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bd7f85854-x7mgg" podUID="c520c01d-eab5-4613-a97d-e129d2838442" Sep 13 00:28:28.705856 
containerd[1470]: time="2025-09-13T00:28:28.705309006Z" level=error msg="StopPodSandbox for \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\" failed" error="failed to destroy network for sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.706017 kubelet[2655]: E0913 00:28:28.705567 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Sep 13 00:28:28.706017 kubelet[2655]: E0913 00:28:28.705620 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb"} Sep 13 00:28:28.706017 kubelet[2655]: E0913 00:28:28.705656 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"de7829ee-942c-4eb2-8399-15d4e08c4967\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:28:28.706017 kubelet[2655]: E0913 00:28:28.705695 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"de7829ee-942c-4eb2-8399-15d4e08c4967\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65757d5db6-hv4sh" podUID="de7829ee-942c-4eb2-8399-15d4e08c4967" Sep 13 00:28:28.712729 containerd[1470]: time="2025-09-13T00:28:28.712635801Z" level=error msg="StopPodSandbox for \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\" failed" error="failed to destroy network for sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.713485 kubelet[2655]: E0913 00:28:28.713431 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:28.714262 kubelet[2655]: E0913 00:28:28.713662 2655 kuberuntime_manager.go:1479] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c"} Sep 13 00:28:28.714262 kubelet[2655]: E0913 00:28:28.714194 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef73e584-e419-4d03-b7f7-96ac3d16f498\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:28:28.714262 kubelet[2655]: E0913 00:28:28.714222 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef73e584-e419-4d03-b7f7-96ac3d16f498\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bd7f85854-msr2v" podUID="ef73e584-e419-4d03-b7f7-96ac3d16f498" Sep 13 00:28:28.720908 containerd[1470]: time="2025-09-13T00:28:28.720852180Z" level=error msg="StopPodSandbox for \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\" failed" error="failed to destroy network for sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.721431 kubelet[2655]: E0913 00:28:28.721277 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Sep 13 00:28:28.721431 kubelet[2655]: E0913 00:28:28.721325 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f"} Sep 13 00:28:28.721431 kubelet[2655]: E0913 00:28:28.721360 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a568f78d-444c-478c-ac80-6c89593927f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:28:28.721431 kubelet[2655]: E0913 00:28:28.721384 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a568f78d-444c-478c-ac80-6c89593927f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2wk28" podUID="a568f78d-444c-478c-ac80-6c89593927f2" Sep 13 00:28:28.724110 containerd[1470]: time="2025-09-13T00:28:28.724053202Z" level=error msg="StopPodSandbox for \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\" failed" error="failed to destroy network for sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.724704 kubelet[2655]: E0913 00:28:28.724377 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:28.724704 kubelet[2655]: E0913 00:28:28.724427 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4"} Sep 13 00:28:28.724704 kubelet[2655]: E0913 00:28:28.724465 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10ee540c-0324-4579-8d1e-34d8475f5cac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:28:28.724704 kubelet[2655]: E0913 00:28:28.724488 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10ee540c-0324-4579-8d1e-34d8475f5cac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-65946" podUID="10ee540c-0324-4579-8d1e-34d8475f5cac" Sep 13 00:28:28.725141 containerd[1470]: time="2025-09-13T00:28:28.724810478Z" level=error msg="StopPodSandbox for \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\" failed" error="failed to destroy network for sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.725319 kubelet[2655]: E0913 00:28:28.725194 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:28.725319 kubelet[2655]: E0913 00:28:28.725247 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d"} Sep 13 00:28:28.725319 kubelet[2655]: E0913 00:28:28.725273 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9a34000-b87c-4a54-a63c-fdd33a922040\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:28:28.725319 kubelet[2655]: E0913 00:28:28.725294 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9a34000-b87c-4a54-a63c-fdd33a922040\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-g9w6r" podUID="d9a34000-b87c-4a54-a63c-fdd33a922040" Sep 13 00:28:28.728778 containerd[1470]: time="2025-09-13T00:28:28.728720487Z" level=error msg="StopPodSandbox for \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\" failed" error="failed to destroy network for sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:28:28.729002 kubelet[2655]: E0913 00:28:28.728964 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:28.729107 kubelet[2655]: E0913 00:28:28.729014 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649"} Sep 13 00:28:28.729107 kubelet[2655]: E0913 00:28:28.729046 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3df007d3-21cb-4a21-97dd-dba0e3686044\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:28:28.729107 kubelet[2655]: E0913 00:28:28.729067 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"3df007d3-21cb-4a21-97dd-dba0e3686044\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9498d6585-s89kj" podUID="3df007d3-21cb-4a21-97dd-dba0e3686044" Sep 13 00:28:28.827545 sshd[3366]: Connection closed by authenticating user root 107.175.39.180 port 41956 [preauth] Sep 13 00:28:28.830923 systemd[1]: sshd@32-195.201.238.219:22-107.175.39.180:41956.service: Deactivated successfully. Sep 13 00:28:29.011172 systemd[1]: Started sshd@33-195.201.238.219:22-107.175.39.180:41966.service - OpenSSH per-connection server daemon (107.175.39.180:41966). Sep 13 00:28:31.955578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429286379.mount: Deactivated successfully. Sep 13 00:28:31.993440 containerd[1470]: time="2025-09-13T00:28:31.992892018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:31.995327 containerd[1470]: time="2025-09-13T00:28:31.995025907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 13 00:28:31.997089 containerd[1470]: time="2025-09-13T00:28:31.996993365Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:32.001779 containerd[1470]: time="2025-09-13T00:28:32.001630571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:32.002848 containerd[1470]: time="2025-09-13T00:28:32.002792749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.445410209s" Sep 13 00:28:32.003048 containerd[1470]: time="2025-09-13T00:28:32.002853560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 00:28:32.020550 containerd[1470]: time="2025-09-13T00:28:32.020503311Z" level=info msg="CreateContainer within sandbox \"18f4aafc495d4585f006bf8eb2fa559207e938b29ccc8b8b2120a2e3b1552b17\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:28:32.043249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388642692.mount: Deactivated successfully. 
Sep 13 00:28:32.052632 containerd[1470]: time="2025-09-13T00:28:32.052458744Z" level=info msg="CreateContainer within sandbox \"18f4aafc495d4585f006bf8eb2fa559207e938b29ccc8b8b2120a2e3b1552b17\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8c4218e75fe9e7782c02b43dce9de65062a5fd4e594a37961613bff0836b7eff\"" Sep 13 00:28:32.054760 containerd[1470]: time="2025-09-13T00:28:32.053520544Z" level=info msg="StartContainer for \"8c4218e75fe9e7782c02b43dce9de65062a5fd4e594a37961613bff0836b7eff\"" Sep 13 00:28:32.095202 systemd[1]: Started cri-containerd-8c4218e75fe9e7782c02b43dce9de65062a5fd4e594a37961613bff0836b7eff.scope - libcontainer container 8c4218e75fe9e7782c02b43dce9de65062a5fd4e594a37961613bff0836b7eff. Sep 13 00:28:32.141643 containerd[1470]: time="2025-09-13T00:28:32.141591623Z" level=info msg="StartContainer for \"8c4218e75fe9e7782c02b43dce9de65062a5fd4e594a37961613bff0836b7eff\" returns successfully" Sep 13 00:28:32.324868 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:28:32.324994 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 13 00:28:32.482766 containerd[1470]: time="2025-09-13T00:28:32.482349020Z" level=info msg="StopPodSandbox for \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\"" Sep 13 00:28:32.520007 sshd[3777]: Connection closed by authenticating user root 107.175.39.180 port 41966 [preauth] Sep 13 00:28:32.528951 systemd[1]: sshd@33-195.201.238.219:22-107.175.39.180:41966.service: Deactivated successfully. Sep 13 00:28:32.724209 systemd[1]: Started sshd@34-195.201.238.219:22-107.175.39.180:41982.service - OpenSSH per-connection server daemon (107.175.39.180:41982). Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.630 [INFO][3838] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.630 [INFO][3838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" iface="eth0" netns="/var/run/netns/cni-1d8518ce-1fff-be8f-bc68-25b56da30e8e" Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.631 [INFO][3838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" iface="eth0" netns="/var/run/netns/cni-1d8518ce-1fff-be8f-bc68-25b56da30e8e" Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.632 [INFO][3838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" iface="eth0" netns="/var/run/netns/cni-1d8518ce-1fff-be8f-bc68-25b56da30e8e" Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.632 [INFO][3838] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.632 [INFO][3838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.740 [INFO][3853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" HandleID="k8s-pod-network.346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0" Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.742 [INFO][3853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.742 [INFO][3853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.763 [WARNING][3853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" HandleID="k8s-pod-network.346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0" Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.764 [INFO][3853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" HandleID="k8s-pod-network.346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0" Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.767 [INFO][3853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:32.775607 containerd[1470]: 2025-09-13 00:28:32.773 [INFO][3838] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Sep 13 00:28:32.776438 containerd[1470]: time="2025-09-13T00:28:32.776397295Z" level=info msg="TearDown network for sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\" successfully" Sep 13 00:28:32.776628 containerd[1470]: time="2025-09-13T00:28:32.776607334Z" level=info msg="StopPodSandbox for \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\" returns successfully" Sep 13 00:28:32.946654 kubelet[2655]: I0913 00:28:32.946605 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de7829ee-942c-4eb2-8399-15d4e08c4967-whisker-backend-key-pair\") pod \"de7829ee-942c-4eb2-8399-15d4e08c4967\" (UID: \"de7829ee-942c-4eb2-8399-15d4e08c4967\") " Sep 13 00:28:32.948115 kubelet[2655]: I0913 00:28:32.947272 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7ktx\" (UniqueName: \"kubernetes.io/projected/de7829ee-942c-4eb2-8399-15d4e08c4967-kube-api-access-h7ktx\") pod \"de7829ee-942c-4eb2-8399-15d4e08c4967\" (UID: \"de7829ee-942c-4eb2-8399-15d4e08c4967\") " Sep 13 00:28:32.948115 kubelet[2655]: I0913 00:28:32.947333 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7829ee-942c-4eb2-8399-15d4e08c4967-whisker-ca-bundle\") pod \"de7829ee-942c-4eb2-8399-15d4e08c4967\" (UID: \"de7829ee-942c-4eb2-8399-15d4e08c4967\") " Sep 13 00:28:32.948115 kubelet[2655]: I0913 00:28:32.947789 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7829ee-942c-4eb2-8399-15d4e08c4967-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "de7829ee-942c-4eb2-8399-15d4e08c4967" (UID: "de7829ee-942c-4eb2-8399-15d4e08c4967"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:28:32.952728 kubelet[2655]: I0913 00:28:32.952657 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7829ee-942c-4eb2-8399-15d4e08c4967-kube-api-access-h7ktx" (OuterVolumeSpecName: "kube-api-access-h7ktx") pod "de7829ee-942c-4eb2-8399-15d4e08c4967" (UID: "de7829ee-942c-4eb2-8399-15d4e08c4967"). InnerVolumeSpecName "kube-api-access-h7ktx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:28:32.953163 kubelet[2655]: I0913 00:28:32.953125 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7829ee-942c-4eb2-8399-15d4e08c4967-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "de7829ee-942c-4eb2-8399-15d4e08c4967" (UID: "de7829ee-942c-4eb2-8399-15d4e08c4967"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:28:32.957543 systemd[1]: run-netns-cni\x2d1d8518ce\x2d1fff\x2dbe8f\x2dbc68\x2d25b56da30e8e.mount: Deactivated successfully. Sep 13 00:28:32.958881 systemd[1]: var-lib-kubelet-pods-de7829ee\x2d942c\x2d4eb2\x2d8399\x2d15d4e08c4967-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh7ktx.mount: Deactivated successfully. Sep 13 00:28:32.959032 systemd[1]: var-lib-kubelet-pods-de7829ee\x2d942c\x2d4eb2\x2d8399\x2d15d4e08c4967-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:28:33.048290 kubelet[2655]: I0913 00:28:33.048032 2655 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de7829ee-942c-4eb2-8399-15d4e08c4967-whisker-backend-key-pair\") on node \"ci-4081-3-5-n-9bb66b8eb5\" DevicePath \"\"" Sep 13 00:28:33.048290 kubelet[2655]: I0913 00:28:33.048111 2655 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7ktx\" (UniqueName: \"kubernetes.io/projected/de7829ee-942c-4eb2-8399-15d4e08c4967-kube-api-access-h7ktx\") on node \"ci-4081-3-5-n-9bb66b8eb5\" DevicePath \"\"" Sep 13 00:28:33.048290 kubelet[2655]: I0913 00:28:33.048135 2655 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7829ee-942c-4eb2-8399-15d4e08c4967-whisker-ca-bundle\") on node \"ci-4081-3-5-n-9bb66b8eb5\" DevicePath \"\"" Sep 13 00:28:33.356515 systemd[1]: Removed slice kubepods-besteffort-podde7829ee_942c_4eb2_8399_15d4e08c4967.slice - libcontainer container kubepods-besteffort-podde7829ee_942c_4eb2_8399_15d4e08c4967.slice. Sep 13 00:28:33.679353 kubelet[2655]: I0913 00:28:33.679182 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z7s2x" podStartSLOduration=3.2642654 podStartE2EDuration="14.677999013s" podCreationTimestamp="2025-09-13 00:28:19 +0000 UTC" firstStartedPulling="2025-09-13 00:28:20.590570539 +0000 UTC m=+25.373601807" lastFinishedPulling="2025-09-13 00:28:32.004304152 +0000 UTC m=+36.787335420" observedRunningTime="2025-09-13 00:28:32.668079017 +0000 UTC m=+37.451110285" watchObservedRunningTime="2025-09-13 00:28:33.677999013 +0000 UTC m=+38.461030401" Sep 13 00:28:33.759547 systemd[1]: Created slice kubepods-besteffort-podffa22063_25d7_4186_97ac_0ebf912f6b77.slice - libcontainer container kubepods-besteffort-podffa22063_25d7_4186_97ac_0ebf912f6b77.slice. 
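
The UnmountVolume / TearDown / "Volume detached" progression across these entries is kubelet's volume reconciler at work: volumes still present in the actual state but no longer in the desired state (the whisker pod was deleted) get unmounted and then reported detached. A toy rendering of that loop; the types and messages are illustrative, not kubelet's API:

    package main

    import "fmt"

    type volume struct{ name, pod string }

    // reconcile unmounts every mounted volume that no pod still wants,
    // echoing the operationExecutor lines in the log above.
    func reconcile(desired map[string]bool, mounted []volume) {
        for _, v := range mounted {
            if desired[v.name] {
                continue
            }
            fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
            // TearDown (the actual unmount) would run here.
            fmt.Printf("Volume detached for volume %q DevicePath %q\n", v.name, "")
        }
    }

    func main() {
        mounted := []volume{
            {"whisker-backend-key-pair", "de7829ee-942c-4eb2-8399-15d4e08c4967"},
            {"kube-api-access-h7ktx", "de7829ee-942c-4eb2-8399-15d4e08c4967"},
            {"whisker-ca-bundle", "de7829ee-942c-4eb2-8399-15d4e08c4967"},
        }
        reconcile(map[string]bool{}, mounted) // the pod is gone, nothing is desired
    }
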
Sep 13 00:28:33.855246 kubelet[2655]: I0913 00:28:33.854874 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ffa22063-25d7-4186-97ac-0ebf912f6b77-whisker-backend-key-pair\") pod \"whisker-66f8fff49-pjwdc\" (UID: \"ffa22063-25d7-4186-97ac-0ebf912f6b77\") " pod="calico-system/whisker-66f8fff49-pjwdc" Sep 13 00:28:33.855246 kubelet[2655]: I0913 00:28:33.854997 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p58kz\" (UniqueName: \"kubernetes.io/projected/ffa22063-25d7-4186-97ac-0ebf912f6b77-kube-api-access-p58kz\") pod \"whisker-66f8fff49-pjwdc\" (UID: \"ffa22063-25d7-4186-97ac-0ebf912f6b77\") " pod="calico-system/whisker-66f8fff49-pjwdc" Sep 13 00:28:33.855246 kubelet[2655]: I0913 00:28:33.855084 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffa22063-25d7-4186-97ac-0ebf912f6b77-whisker-ca-bundle\") pod \"whisker-66f8fff49-pjwdc\" (UID: \"ffa22063-25d7-4186-97ac-0ebf912f6b77\") " pod="calico-system/whisker-66f8fff49-pjwdc" Sep 13 00:28:34.067313 containerd[1470]: time="2025-09-13T00:28:34.066773741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f8fff49-pjwdc,Uid:ffa22063-25d7-4186-97ac-0ebf912f6b77,Namespace:calico-system,Attempt:0,}" Sep 13 00:28:34.367459 systemd-networkd[1372]: cali56e03a4c655: Link UP Sep 13 00:28:34.372003 systemd-networkd[1372]: cali56e03a4c655: Gained carrier Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.159 [INFO][4005] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.181 [INFO][4005] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0 whisker-66f8fff49- calico-system ffa22063-25d7-4186-97ac-0ebf912f6b77 910 0 2025-09-13 00:28:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66f8fff49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-n-9bb66b8eb5 whisker-66f8fff49-pjwdc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali56e03a4c655 [] [] }} ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Namespace="calico-system" Pod="whisker-66f8fff49-pjwdc" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.181 [INFO][4005] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Namespace="calico-system" Pod="whisker-66f8fff49-pjwdc" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.264 [INFO][4021] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" HandleID="k8s-pod-network.c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.264 [INFO][4021] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" HandleID="k8s-pod-network.c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102440), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-9bb66b8eb5", "pod":"whisker-66f8fff49-pjwdc", "timestamp":"2025-09-13 00:28:34.264140845 +0000 UTC"}, Hostname:"ci-4081-3-5-n-9bb66b8eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.264 [INFO][4021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.266 [INFO][4021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.266 [INFO][4021] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-9bb66b8eb5' Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.288 [INFO][4021] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.299 [INFO][4021] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.314 [INFO][4021] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.318 [INFO][4021] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.323 [INFO][4021] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.323 [INFO][4021] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.326 [INFO][4021] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42 Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.332 [INFO][4021] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.340 [INFO][4021] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.58.65/26] block=192.168.58.64/26 handle="k8s-pod-network.c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.341 [INFO][4021] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.65/26] handle="k8s-pod-network.c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.341 [INFO][4021] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:34.404418 containerd[1470]: 2025-09-13 00:28:34.341 [INFO][4021] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.65/26] IPv6=[] ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" HandleID="k8s-pod-network.c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0" Sep 13 00:28:34.405810 containerd[1470]: 2025-09-13 00:28:34.345 [INFO][4005] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Namespace="calico-system" Pod="whisker-66f8fff49-pjwdc" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0", GenerateName:"whisker-66f8fff49-", Namespace:"calico-system", SelfLink:"", UID:"ffa22063-25d7-4186-97ac-0ebf912f6b77", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f8fff49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"", Pod:"whisker-66f8fff49-pjwdc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali56e03a4c655", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:34.405810 containerd[1470]: 2025-09-13 00:28:34.345 [INFO][4005] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.65/32] ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Namespace="calico-system" Pod="whisker-66f8fff49-pjwdc" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0" Sep 13 00:28:34.405810 containerd[1470]: 2025-09-13 00:28:34.346 [INFO][4005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56e03a4c655 ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Namespace="calico-system" Pod="whisker-66f8fff49-pjwdc" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0" Sep 13 00:28:34.405810 containerd[1470]: 2025-09-13 00:28:34.369 [INFO][4005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Namespace="calico-system" Pod="whisker-66f8fff49-pjwdc" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0" Sep 13 00:28:34.405810 containerd[1470]: 2025-09-13 00:28:34.370 [INFO][4005] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Namespace="calico-system" Pod="whisker-66f8fff49-pjwdc" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0", GenerateName:"whisker-66f8fff49-", Namespace:"calico-system", SelfLink:"", UID:"ffa22063-25d7-4186-97ac-0ebf912f6b77", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f8fff49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42", Pod:"whisker-66f8fff49-pjwdc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali56e03a4c655", MAC:"3a:b9:49:47:cc:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:34.405810 containerd[1470]: 2025-09-13 00:28:34.400 [INFO][4005] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42" Namespace="calico-system" Pod="whisker-66f8fff49-pjwdc" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--66f8fff49--pjwdc-eth0" Sep 13 00:28:34.440580 containerd[1470]: time="2025-09-13T00:28:34.439965439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:34.440580 containerd[1470]: time="2025-09-13T00:28:34.440119106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:34.440580 containerd[1470]: time="2025-09-13T00:28:34.440152592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:34.440849 containerd[1470]: time="2025-09-13T00:28:34.440265253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:34.494009 systemd[1]: Started cri-containerd-c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42.scope - libcontainer container c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42. 
Sep 13 00:28:34.583557 containerd[1470]: time="2025-09-13T00:28:34.583473985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f8fff49-pjwdc,Uid:ffa22063-25d7-4186-97ac-0ebf912f6b77,Namespace:calico-system,Attempt:0,} returns sandbox id \"c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42\"" Sep 13 00:28:34.590038 containerd[1470]: time="2025-09-13T00:28:34.589982074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:28:34.719721 kernel: bpftool[4132]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:28:34.979402 systemd-networkd[1372]: vxlan.calico: Link UP Sep 13 00:28:34.979412 systemd-networkd[1372]: vxlan.calico: Gained carrier Sep 13 00:28:35.358395 kubelet[2655]: I0913 00:28:35.354786 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de7829ee-942c-4eb2-8399-15d4e08c4967" path="/var/lib/kubelet/pods/de7829ee-942c-4eb2-8399-15d4e08c4967/volumes" Sep 13 00:28:35.754492 systemd-networkd[1372]: cali56e03a4c655: Gained IPv6LL Sep 13 00:28:35.982891 containerd[1470]: time="2025-09-13T00:28:35.982826996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:35.984860 containerd[1470]: time="2025-09-13T00:28:35.984791702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 13 00:28:35.986399 containerd[1470]: time="2025-09-13T00:28:35.986304368Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:35.989354 containerd[1470]: time="2025-09-13T00:28:35.989266730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:35.990478 containerd[1470]: time="2025-09-13T00:28:35.990370684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.400328399s" Sep 13 00:28:35.990801 containerd[1470]: time="2025-09-13T00:28:35.990640172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 00:28:36.000143 containerd[1470]: time="2025-09-13T00:28:35.999780701Z" level=info msg="CreateContainer within sandbox \"c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:28:36.022019 containerd[1470]: time="2025-09-13T00:28:36.021855361Z" level=info msg="CreateContainer within sandbox \"c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"feeeeef363b6fe5225af448d017f3ff4edd964374ed9dd62260e7672a9c39ec1\"" Sep 13 00:28:36.025123 containerd[1470]: time="2025-09-13T00:28:36.024837396Z" level=info msg="StartContainer for \"feeeeef363b6fe5225af448d017f3ff4edd964374ed9dd62260e7672a9c39ec1\"" Sep 13 00:28:36.074956 systemd[1]: Started 
cri-containerd-feeeeef363b6fe5225af448d017f3ff4edd964374ed9dd62260e7672a9c39ec1.scope - libcontainer container feeeeef363b6fe5225af448d017f3ff4edd964374ed9dd62260e7672a9c39ec1. Sep 13 00:28:36.126738 containerd[1470]: time="2025-09-13T00:28:36.125362482Z" level=info msg="StartContainer for \"feeeeef363b6fe5225af448d017f3ff4edd964374ed9dd62260e7672a9c39ec1\" returns successfully" Sep 13 00:28:36.128990 containerd[1470]: time="2025-09-13T00:28:36.128837163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:28:36.614991 sshd[3880]: Connection closed by authenticating user root 107.175.39.180 port 41982 [preauth] Sep 13 00:28:36.614329 systemd[1]: sshd@34-195.201.238.219:22-107.175.39.180:41982.service: Deactivated successfully. Sep 13 00:28:36.763057 systemd[1]: Started sshd@35-195.201.238.219:22-107.175.39.180:33266.service - OpenSSH per-connection server daemon (107.175.39.180:33266). Sep 13 00:28:36.842645 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Sep 13 00:28:38.029602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17680125.mount: Deactivated successfully. Sep 13 00:28:38.052471 containerd[1470]: time="2025-09-13T00:28:38.051295907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:38.052471 containerd[1470]: time="2025-09-13T00:28:38.052112283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 13 00:28:38.053270 containerd[1470]: time="2025-09-13T00:28:38.053230510Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:38.056543 containerd[1470]: time="2025-09-13T00:28:38.056487853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:38.058109 containerd[1470]: time="2025-09-13T00:28:38.058057514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.928323276s" Sep 13 00:28:38.058290 containerd[1470]: time="2025-09-13T00:28:38.058270790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 00:28:38.073149 containerd[1470]: time="2025-09-13T00:28:38.073103262Z" level=info msg="CreateContainer within sandbox \"c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:28:38.094371 containerd[1470]: time="2025-09-13T00:28:38.094218181Z" level=info msg="CreateContainer within sandbox \"c099965270b1d86a1e465596372a5437db6c91b4a04ca927c09f17819ccaee42\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"096168d7938f5f866380fdcb36971fe67f0de258221ad4042281d8b0abd1aa3f\"" Sep 13 00:28:38.105406 containerd[1470]: time="2025-09-13T00:28:38.103983608Z" level=info 
msg="StartContainer for \"096168d7938f5f866380fdcb36971fe67f0de258221ad4042281d8b0abd1aa3f\"" Sep 13 00:28:38.135957 systemd[1]: Started cri-containerd-096168d7938f5f866380fdcb36971fe67f0de258221ad4042281d8b0abd1aa3f.scope - libcontainer container 096168d7938f5f866380fdcb36971fe67f0de258221ad4042281d8b0abd1aa3f. Sep 13 00:28:38.183283 containerd[1470]: time="2025-09-13T00:28:38.183196810Z" level=info msg="StartContainer for \"096168d7938f5f866380fdcb36971fe67f0de258221ad4042281d8b0abd1aa3f\" returns successfully" Sep 13 00:28:39.850176 sshd[4254]: Connection closed by authenticating user root 107.175.39.180 port 33266 [preauth] Sep 13 00:28:39.852560 systemd[1]: sshd@35-195.201.238.219:22-107.175.39.180:33266.service: Deactivated successfully. Sep 13 00:28:40.025118 systemd[1]: Started sshd@36-195.201.238.219:22-107.175.39.180:33270.service - OpenSSH per-connection server daemon (107.175.39.180:33270). Sep 13 00:28:40.351469 containerd[1470]: time="2025-09-13T00:28:40.349769227Z" level=info msg="StopPodSandbox for \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\"" Sep 13 00:28:40.351469 containerd[1470]: time="2025-09-13T00:28:40.349989183Z" level=info msg="StopPodSandbox for \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\"" Sep 13 00:28:40.353573 containerd[1470]: time="2025-09-13T00:28:40.353299837Z" level=info msg="StopPodSandbox for \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\"" Sep 13 00:28:40.447168 kubelet[2655]: I0913 00:28:40.446740 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-66f8fff49-pjwdc" podStartSLOduration=3.975824456 podStartE2EDuration="7.446718106s" podCreationTimestamp="2025-09-13 00:28:33 +0000 UTC" firstStartedPulling="2025-09-13 00:28:34.588439637 +0000 UTC m=+39.371470905" lastFinishedPulling="2025-09-13 00:28:38.059333287 +0000 UTC m=+42.842364555" observedRunningTime="2025-09-13 00:28:38.705545147 +0000 UTC m=+43.488576495" watchObservedRunningTime="2025-09-13 00:28:40.446718106 +0000 UTC m=+45.229749374" Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.438 [INFO][4343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.440 [INFO][4343] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" iface="eth0" netns="/var/run/netns/cni-d9fc5665-d67b-91f9-1c5b-7eafa8834cbb" Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.440 [INFO][4343] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" iface="eth0" netns="/var/run/netns/cni-d9fc5665-d67b-91f9-1c5b-7eafa8834cbb" Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.441 [INFO][4343] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" iface="eth0" netns="/var/run/netns/cni-d9fc5665-d67b-91f9-1c5b-7eafa8834cbb" Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.441 [INFO][4343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.441 [INFO][4343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.482 [INFO][4361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" HandleID="k8s-pod-network.41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.482 [INFO][4361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.482 [INFO][4361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.504 [WARNING][4361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" HandleID="k8s-pod-network.41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.504 [INFO][4361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" HandleID="k8s-pod-network.41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.517 [INFO][4361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:40.524696 containerd[1470]: 2025-09-13 00:28:40.519 [INFO][4343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:40.527790 containerd[1470]: time="2025-09-13T00:28:40.527737255Z" level=info msg="TearDown network for sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\" successfully" Sep 13 00:28:40.527790 containerd[1470]: time="2025-09-13T00:28:40.527788384Z" level=info msg="StopPodSandbox for \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\" returns successfully" Sep 13 00:28:40.532958 systemd[1]: run-netns-cni\x2dd9fc5665\x2dd67b\x2d91f9\x2d1c5b\x2d7eafa8834cbb.mount: Deactivated successfully. Sep 13 00:28:40.536799 containerd[1470]: time="2025-09-13T00:28:40.536759591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g9w6r,Uid:d9a34000-b87c-4a54-a63c-fdd33a922040,Namespace:kube-system,Attempt:1,}" Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.470 [INFO][4342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.471 [INFO][4342] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" iface="eth0" netns="/var/run/netns/cni-a1f71afd-1ccc-3c09-f02f-f973cf811bae" Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.471 [INFO][4342] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" iface="eth0" netns="/var/run/netns/cni-a1f71afd-1ccc-3c09-f02f-f973cf811bae" Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.471 [INFO][4342] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" iface="eth0" netns="/var/run/netns/cni-a1f71afd-1ccc-3c09-f02f-f973cf811bae" Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.471 [INFO][4342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.471 [INFO][4342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.519 [INFO][4367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" HandleID="k8s-pod-network.e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.520 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.520 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.534 [WARNING][4367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" HandleID="k8s-pod-network.e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.534 [INFO][4367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" HandleID="k8s-pod-network.e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.539 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:40.545211 containerd[1470]: 2025-09-13 00:28:40.542 [INFO][4342] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:40.546478 containerd[1470]: time="2025-09-13T00:28:40.546441312Z" level=info msg="TearDown network for sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\" successfully" Sep 13 00:28:40.546553 containerd[1470]: time="2025-09-13T00:28:40.546539128Z" level=info msg="StopPodSandbox for \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\" returns successfully" Sep 13 00:28:40.549561 systemd[1]: run-netns-cni\x2da1f71afd\x2d1ccc\x2d3c09\x2df02f\x2df973cf811bae.mount: Deactivated successfully. Sep 13 00:28:40.550651 containerd[1470]: time="2025-09-13T00:28:40.550605584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd7f85854-msr2v,Uid:ef73e584-e419-4d03-b7f7-96ac3d16f498,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.479 [INFO][4341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.480 [INFO][4341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" iface="eth0" netns="/var/run/netns/cni-9da5795c-1574-56c2-9b1d-45bcec2f0a8b" Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.482 [INFO][4341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" iface="eth0" netns="/var/run/netns/cni-9da5795c-1574-56c2-9b1d-45bcec2f0a8b" Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.482 [INFO][4341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" iface="eth0" netns="/var/run/netns/cni-9da5795c-1574-56c2-9b1d-45bcec2f0a8b" Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.482 [INFO][4341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.482 [INFO][4341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.547 [INFO][4373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" HandleID="k8s-pod-network.8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.547 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.547 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.559 [WARNING][4373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" HandleID="k8s-pod-network.8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.559 [INFO][4373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" HandleID="k8s-pod-network.8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.563 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:40.574175 containerd[1470]: 2025-09-13 00:28:40.568 [INFO][4341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Sep 13 00:28:40.578339 containerd[1470]: time="2025-09-13T00:28:40.577914949Z" level=info msg="TearDown network for sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\" successfully" Sep 13 00:28:40.578339 containerd[1470]: time="2025-09-13T00:28:40.577953956Z" level=info msg="StopPodSandbox for \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\" returns successfully" Sep 13 00:28:40.579492 containerd[1470]: time="2025-09-13T00:28:40.579057774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvd2b,Uid:b312368b-a0fb-41a9-8fcf-b787f4bcfe2e,Namespace:calico-system,Attempt:1,}" Sep 13 00:28:40.580297 systemd[1]: run-netns-cni\x2d9da5795c\x2d1574\x2d56c2\x2d9b1d\x2d45bcec2f0a8b.mount: Deactivated successfully. Sep 13 00:28:40.817587 systemd-networkd[1372]: calie729a0871ed: Link UP Sep 13 00:28:40.819541 systemd-networkd[1372]: calie729a0871ed: Gained carrier Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.645 [INFO][4384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0 coredns-7c65d6cfc9- kube-system d9a34000-b87c-4a54-a63c-fdd33a922040 951 0 2025-09-13 00:28:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-9bb66b8eb5 coredns-7c65d6cfc9-g9w6r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie729a0871ed [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g9w6r" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.645 [INFO][4384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g9w6r" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.729 [INFO][4420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" HandleID="k8s-pod-network.821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" 
Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.729 [INFO][4420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" HandleID="k8s-pod-network.821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b3c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-9bb66b8eb5", "pod":"coredns-7c65d6cfc9-g9w6r", "timestamp":"2025-09-13 00:28:40.728051048 +0000 UTC"}, Hostname:"ci-4081-3-5-n-9bb66b8eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.729 [INFO][4420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.729 [INFO][4420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.729 [INFO][4420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-9bb66b8eb5' Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.757 [INFO][4420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.767 [INFO][4420] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.776 [INFO][4420] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.780 [INFO][4420] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.784 [INFO][4420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.784 [INFO][4420] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.786 [INFO][4420] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.792 [INFO][4420] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.801 [INFO][4420] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.58.66/26] block=192.168.58.64/26 handle="k8s-pod-network.821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.801 [INFO][4420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.66/26] 
handle="k8s-pod-network.821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.801 [INFO][4420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:40.837897 containerd[1470]: 2025-09-13 00:28:40.801 [INFO][4420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.66/26] IPv6=[] ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" HandleID="k8s-pod-network.821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:40.839068 containerd[1470]: 2025-09-13 00:28:40.804 [INFO][4384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g9w6r" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d9a34000-b87c-4a54-a63c-fdd33a922040", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"", Pod:"coredns-7c65d6cfc9-g9w6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie729a0871ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:40.839068 containerd[1470]: 2025-09-13 00:28:40.804 [INFO][4384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.66/32] ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g9w6r" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:40.839068 containerd[1470]: 2025-09-13 00:28:40.804 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie729a0871ed ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g9w6r" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" 
Sep 13 00:28:40.839068 containerd[1470]: 2025-09-13 00:28:40.820 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g9w6r" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:40.839068 containerd[1470]: 2025-09-13 00:28:40.821 [INFO][4384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g9w6r" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d9a34000-b87c-4a54-a63c-fdd33a922040", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae", Pod:"coredns-7c65d6cfc9-g9w6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie729a0871ed", MAC:"f2:7a:9d:9b:0b:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:40.839068 containerd[1470]: 2025-09-13 00:28:40.835 [INFO][4384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g9w6r" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:40.863181 containerd[1470]: time="2025-09-13T00:28:40.862544743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:40.863512 containerd[1470]: time="2025-09-13T00:28:40.863279621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:40.863512 containerd[1470]: time="2025-09-13T00:28:40.863305866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:40.864534 containerd[1470]: time="2025-09-13T00:28:40.863505378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:40.892483 systemd[1]: Started cri-containerd-821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae.scope - libcontainer container 821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae. Sep 13 00:28:40.924633 systemd-networkd[1372]: caliee6e07a32e3: Link UP Sep 13 00:28:40.927072 systemd-networkd[1372]: caliee6e07a32e3: Gained carrier Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.707 [INFO][4397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0 csi-node-driver- calico-system b312368b-a0fb-41a9-8fcf-b787f4bcfe2e 953 0 2025-09-13 00:28:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-n-9bb66b8eb5 csi-node-driver-mvd2b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliee6e07a32e3 [] [] }} ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Namespace="calico-system" Pod="csi-node-driver-mvd2b" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.707 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Namespace="calico-system" Pod="csi-node-driver-mvd2b" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.748 [INFO][4427] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" HandleID="k8s-pod-network.b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.749 [INFO][4427] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" HandleID="k8s-pod-network.b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003305f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-9bb66b8eb5", "pod":"csi-node-driver-mvd2b", "timestamp":"2025-09-13 00:28:40.748751907 +0000 UTC"}, Hostname:"ci-4081-3-5-n-9bb66b8eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.749 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.801 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.801 [INFO][4427] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-9bb66b8eb5' Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.857 [INFO][4427] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.867 [INFO][4427] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.877 [INFO][4427] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.881 [INFO][4427] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.888 [INFO][4427] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.888 [INFO][4427] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.891 [INFO][4427] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23 Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.901 [INFO][4427] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.913 [INFO][4427] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.58.67/26] block=192.168.58.64/26 handle="k8s-pod-network.b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.913 [INFO][4427] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.67/26] handle="k8s-pod-network.b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.913 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
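The [4427] entries above trace Calico's IPAM auto-assignment for csi-node-driver-mvd2b end to end: take the host-wide IPAM lock, confirm this host's affinity to the 192.168.58.64/26 block, load the block, claim the next free address, write the block back, release the lock. A minimal, self-contained Go sketch of that loop follows; the types and helper names are illustrative stand-ins, not libcalico-go's real API, and the handle string is truncated for the demo.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// Illustrative stand-in for an IPAM block; the real code lives in
// libcalico-go and persists blocks to the datastore instead of a map.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // address -> handle that claimed it
}

var (
	hostLock sync.Mutex // models the "host-wide IPAM lock" in the log
	blk      = block{
		cidr:      netip.MustParsePrefix("192.168.58.64/26"),
		allocated: map[netip.Addr]string{},
	}
)

// autoAssign mirrors the logged sequence: acquire the lock, scan the
// affine block for a free address, claim it, release the lock. The real
// allocator also honors reserved addresses, which is why the first
// claims in this capture start at .66 rather than .64.
func autoAssign(handle string) (netip.Addr, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	for a := blk.cidr.Addr(); blk.cidr.Contains(a); a = a.Next() {
		if _, taken := blk.allocated[a]; !taken {
			blk.allocated[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", blk.cidr)
}

func main() {
	ip, err := autoAssign("k8s-pod-network.b2d9165c9365") // truncated handle
	if err != nil {
		panic(err)
	}
	fmt.Println("claimed", ip) // the log claimed 192.168.58.67/26 at this point
}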
Sep 13 00:28:40.957084 containerd[1470]: 2025-09-13 00:28:40.913 [INFO][4427] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.67/26] IPv6=[] ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" HandleID="k8s-pod-network.b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" Sep 13 00:28:40.958008 containerd[1470]: 2025-09-13 00:28:40.919 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Namespace="calico-system" Pod="csi-node-driver-mvd2b" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"", Pod:"csi-node-driver-mvd2b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee6e07a32e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:40.958008 containerd[1470]: 2025-09-13 00:28:40.919 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.67/32] ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Namespace="calico-system" Pod="csi-node-driver-mvd2b" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" Sep 13 00:28:40.958008 containerd[1470]: 2025-09-13 00:28:40.919 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee6e07a32e3 ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Namespace="calico-system" Pod="csi-node-driver-mvd2b" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" Sep 13 00:28:40.958008 containerd[1470]: 2025-09-13 00:28:40.926 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Namespace="calico-system" Pod="csi-node-driver-mvd2b" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" Sep 13 00:28:40.958008 containerd[1470]: 2025-09-13 00:28:40.928 [INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Namespace="calico-system" Pod="csi-node-driver-mvd2b" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23", Pod:"csi-node-driver-mvd2b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee6e07a32e3", MAC:"c2:76:51:61:61:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:40.958008 containerd[1470]: 2025-09-13 00:28:40.947 [INFO][4397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23" Namespace="calico-system" Pod="csi-node-driver-mvd2b" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0" Sep 13 00:28:40.976432 containerd[1470]: time="2025-09-13T00:28:40.976386987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g9w6r,Uid:d9a34000-b87c-4a54-a63c-fdd33a922040,Namespace:kube-system,Attempt:1,} returns sandbox id \"821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae\"" Sep 13 00:28:40.988934 containerd[1470]: time="2025-09-13T00:28:40.988883642Z" level=info msg="CreateContainer within sandbox \"821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:28:41.018866 containerd[1470]: time="2025-09-13T00:28:41.018681448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:41.018866 containerd[1470]: time="2025-09-13T00:28:41.018819550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:41.018866 containerd[1470]: time="2025-09-13T00:28:41.018832352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:41.019130 containerd[1470]: time="2025-09-13T00:28:41.018927847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:41.022751 containerd[1470]: time="2025-09-13T00:28:41.022665041Z" level=info msg="CreateContainer within sandbox \"821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd601a588a42379a1a856b886e9565be2e2d61f43717c856311131211bfbb4b3\"" Sep 13 00:28:41.026932 containerd[1470]: time="2025-09-13T00:28:41.025856148Z" level=info msg="StartContainer for \"bd601a588a42379a1a856b886e9565be2e2d61f43717c856311131211bfbb4b3\"" Sep 13 00:28:41.045001 systemd[1]: Started cri-containerd-b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23.scope - libcontainer container b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23. Sep 13 00:28:41.048317 systemd-networkd[1372]: calibd25054ba33: Link UP Sep 13 00:28:41.050238 systemd-networkd[1372]: calibd25054ba33: Gained carrier Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.705 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0 calico-apiserver-7bd7f85854- calico-apiserver ef73e584-e419-4d03-b7f7-96ac3d16f498 952 0 2025-09-13 00:28:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bd7f85854 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-9bb66b8eb5 calico-apiserver-7bd7f85854-msr2v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd25054ba33 [] [] }} ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-msr2v" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.705 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-msr2v" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.782 [INFO][4433] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" HandleID="k8s-pod-network.1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.782 [INFO][4433] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" HandleID="k8s-pod-network.1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-9bb66b8eb5", "pod":"calico-apiserver-7bd7f85854-msr2v", "timestamp":"2025-09-13 00:28:40.782331804 +0000 UTC"}, Hostname:"ci-4081-3-5-n-9bb66b8eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.782 [INFO][4433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.914 [INFO][4433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.915 [INFO][4433] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-9bb66b8eb5' Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.962 [INFO][4433] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.977 [INFO][4433] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.994 [INFO][4433] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:40.998 [INFO][4433] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:41.003 [INFO][4433] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:41.003 [INFO][4433] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:41.007 [INFO][4433] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597 Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:41.012 [INFO][4433] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:41.026 [INFO][4433] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.58.68/26] block=192.168.58.64/26 handle="k8s-pod-network.1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:41.027 [INFO][4433] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.68/26] handle="k8s-pod-network.1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:41.029 [INFO][4433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
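Each node gets affinity to a /26 IPAM block; here ci-4081-3-5-n-9bb66b8eb5 owns 192.168.58.64/26, i.e. 64 addresses from .64 through .127, and the pods in this capture are handed .66, .67, .68, .69 and .70 in sequence. A quick way to sanity-check a block's bounds (hypothetical helper, not part of Calico):

package main

import (
	"fmt"
	"net/netip"
)

// blockBounds walks a CIDR and returns its first and last address.
// Purely illustrative; Calico computes block ranges internally.
func blockBounds(cidr string) (first, last netip.Addr) {
	p := netip.MustParsePrefix(cidr)
	first = p.Addr()
	for a := first; p.Contains(a); a = a.Next() {
		last = a
	}
	return first, last
}

func main() {
	f, l := blockBounds("192.168.58.64/26")
	// Prints: 192.168.58.64/26 spans 192.168.58.64-192.168.58.127 (64 addresses)
	fmt.Printf("192.168.58.64/26 spans %s-%s (%d addresses)\n", f, l, 1<<(32-26))
}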
Sep 13 00:28:41.090317 containerd[1470]: 2025-09-13 00:28:41.029 [INFO][4433] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.68/26] IPv6=[] ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" HandleID="k8s-pod-network.1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:41.092999 containerd[1470]: 2025-09-13 00:28:41.040 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-msr2v" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0", GenerateName:"calico-apiserver-7bd7f85854-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef73e584-e419-4d03-b7f7-96ac3d16f498", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd7f85854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"", Pod:"calico-apiserver-7bd7f85854-msr2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd25054ba33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:41.092999 containerd[1470]: 2025-09-13 00:28:41.041 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.68/32] ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-msr2v" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:41.092999 containerd[1470]: 2025-09-13 00:28:41.041 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd25054ba33 ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-msr2v" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:41.092999 containerd[1470]: 2025-09-13 00:28:41.051 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-msr2v" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:41.092999 containerd[1470]: 2025-09-13 
00:28:41.054 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-msr2v" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0", GenerateName:"calico-apiserver-7bd7f85854-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef73e584-e419-4d03-b7f7-96ac3d16f498", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd7f85854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597", Pod:"calico-apiserver-7bd7f85854-msr2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd25054ba33", MAC:"16:89:c5:d0:db:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:41.092999 containerd[1470]: 2025-09-13 00:28:41.075 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-msr2v" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:41.113833 systemd[1]: Started cri-containerd-bd601a588a42379a1a856b886e9565be2e2d61f43717c856311131211bfbb4b3.scope - libcontainer container bd601a588a42379a1a856b886e9565be2e2d61f43717c856311131211bfbb4b3. Sep 13 00:28:41.140869 containerd[1470]: time="2025-09-13T00:28:41.140796009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvd2b,Uid:b312368b-a0fb-41a9-8fcf-b787f4bcfe2e,Namespace:calico-system,Attempt:1,} returns sandbox id \"b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23\"" Sep 13 00:28:41.145105 containerd[1470]: time="2025-09-13T00:28:41.145066608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:28:41.146766 containerd[1470]: time="2025-09-13T00:28:41.143022723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:41.146766 containerd[1470]: time="2025-09-13T00:28:41.146401820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:41.146766 containerd[1470]: time="2025-09-13T00:28:41.146419023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:41.146766 containerd[1470]: time="2025-09-13T00:28:41.146533201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:41.170649 containerd[1470]: time="2025-09-13T00:28:41.170569780Z" level=info msg="StartContainer for \"bd601a588a42379a1a856b886e9565be2e2d61f43717c856311131211bfbb4b3\" returns successfully" Sep 13 00:28:41.183127 systemd[1]: Started cri-containerd-1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597.scope - libcontainer container 1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597. Sep 13 00:28:41.227113 containerd[1470]: time="2025-09-13T00:28:41.227064276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd7f85854-msr2v,Uid:ef73e584-e419-4d03-b7f7-96ac3d16f498,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597\"" Sep 13 00:28:41.350515 containerd[1470]: time="2025-09-13T00:28:41.347780095Z" level=info msg="StopPodSandbox for \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\"" Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.419 [INFO][4639] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.419 [INFO][4639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" iface="eth0" netns="/var/run/netns/cni-abcc18fa-050d-3b78-5d19-ed734e173dcd" Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.421 [INFO][4639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" iface="eth0" netns="/var/run/netns/cni-abcc18fa-050d-3b78-5d19-ed734e173dcd" Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.422 [INFO][4639] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" iface="eth0" netns="/var/run/netns/cni-abcc18fa-050d-3b78-5d19-ed734e173dcd" Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.423 [INFO][4639] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.423 [INFO][4639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.453 [INFO][4646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" HandleID="k8s-pod-network.15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.454 [INFO][4646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.454 [INFO][4646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.466 [WARNING][4646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" HandleID="k8s-pod-network.15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.467 [INFO][4646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" HandleID="k8s-pod-network.15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.472 [INFO][4646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:41.479052 containerd[1470]: 2025-09-13 00:28:41.475 [INFO][4639] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Sep 13 00:28:41.482002 containerd[1470]: time="2025-09-13T00:28:41.479037509Z" level=info msg="TearDown network for sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\" successfully" Sep 13 00:28:41.482002 containerd[1470]: time="2025-09-13T00:28:41.479097839Z" level=info msg="StopPodSandbox for \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\" returns successfully" Sep 13 00:28:41.482002 containerd[1470]: time="2025-09-13T00:28:41.480271945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2wk28,Uid:a568f78d-444c-478c-ac80-6c89593927f2,Namespace:kube-system,Attempt:1,}" Sep 13 00:28:41.545998 systemd[1]: run-netns-cni\x2dabcc18fa\x2d050d\x2d3b78\x2d5d19\x2ded734e173dcd.mount: Deactivated successfully. 
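The teardown of sandbox 15907000aac5… above shows the release path tolerating missing state: the handle lookup fails ("Asked to release address but it doesn't exist. Ignoring"), the plugin falls back to releasing by workload ID, and the CNI DEL still returns successfully, so a repeated or raced delete converges rather than erroring. A self-contained sketch of that idempotent shape, with stand-in types and made-up identifiers:

package main

import (
	"log"
	"net/netip"
	"sync"
)

var (
	ipamLock  sync.Mutex                // host-wide IPAM lock, as in the log
	allocByID = map[string]netip.Addr{} // handle or workload ID -> address
)

// releaseIP mirrors the logged teardown: look up the handle first, warn
// and fall back to the workload ID when the handle has no allocation, and
// never fail the delete just because the address is already gone.
func releaseIP(handleID, workloadID string) {
	ipamLock.Lock()
	defer ipamLock.Unlock()

	if ip, ok := allocByID[handleID]; ok {
		delete(allocByID, handleID)
		log.Printf("released %s via handle %s", ip, handleID)
		return
	}
	log.Printf("WARNING: no allocation for handle %s; ignoring", handleID)
	if ip, ok := allocByID[workloadID]; ok { // "Releasing address using workloadID"
		delete(allocByID, workloadID)
		log.Printf("released %s via workload ID %s", ip, workloadID)
	}
}

func main() {
	// Address and IDs are invented for the demo (truncated from the log).
	allocByID["k8s-pod-network.15907000aac5"] = netip.MustParseAddr("192.168.58.65")
	releaseIP("k8s-pod-network.15907000aac5", "coredns--2wk28-eth0")
	releaseIP("k8s-pod-network.15907000aac5", "coredns--2wk28-eth0") // no-op, as in the WARNING above
}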
Sep 13 00:28:41.700862 systemd-networkd[1372]: cali1671035a74a: Link UP Sep 13 00:28:41.704483 systemd-networkd[1372]: cali1671035a74a: Gained carrier Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.581 [INFO][4653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0 coredns-7c65d6cfc9- kube-system a568f78d-444c-478c-ac80-6c89593927f2 970 0 2025-09-13 00:28:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-9bb66b8eb5 coredns-7c65d6cfc9-2wk28 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1671035a74a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2wk28" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.581 [INFO][4653] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2wk28" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.617 [INFO][4664] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" HandleID="k8s-pod-network.c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.617 [INFO][4664] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" HandleID="k8s-pod-network.c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-9bb66b8eb5", "pod":"coredns-7c65d6cfc9-2wk28", "timestamp":"2025-09-13 00:28:41.617615566 +0000 UTC"}, Hostname:"ci-4081-3-5-n-9bb66b8eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.618 [INFO][4664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.618 [INFO][4664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.618 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-9bb66b8eb5' Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.634 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.643 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.651 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.656 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.661 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.661 [INFO][4664] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.664 [INFO][4664] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.670 [INFO][4664] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.684 [INFO][4664] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.58.69/26] block=192.168.58.64/26 handle="k8s-pod-network.c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.685 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.69/26] handle="k8s-pod-network.c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.685 [INFO][4664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:28:41.745873 containerd[1470]: 2025-09-13 00:28:41.685 [INFO][4664] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.69/26] IPv6=[] ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" HandleID="k8s-pod-network.c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:41.746545 containerd[1470]: 2025-09-13 00:28:41.688 [INFO][4653] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2wk28" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a568f78d-444c-478c-ac80-6c89593927f2", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"", Pod:"coredns-7c65d6cfc9-2wk28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1671035a74a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:41.746545 containerd[1470]: 2025-09-13 00:28:41.688 [INFO][4653] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.69/32] ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2wk28" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:41.746545 containerd[1470]: 2025-09-13 00:28:41.688 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1671035a74a ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2wk28" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:41.746545 containerd[1470]: 2025-09-13 00:28:41.704 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-2wk28" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:41.746545 containerd[1470]: 2025-09-13 00:28:41.708 [INFO][4653] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2wk28" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a568f78d-444c-478c-ac80-6c89593927f2", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e", Pod:"coredns-7c65d6cfc9-2wk28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1671035a74a", MAC:"f6:bf:e8:fc:3c:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:41.746545 containerd[1470]: 2025-09-13 00:28:41.741 [INFO][4653] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2wk28" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:41.778171 kubelet[2655]: I0913 00:28:41.778076 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g9w6r" podStartSLOduration=39.778051136 podStartE2EDuration="39.778051136s" podCreationTimestamp="2025-09-13 00:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:28:41.776948921 +0000 UTC m=+46.559980149" watchObservedRunningTime="2025-09-13 00:28:41.778051136 +0000 UTC m=+46.561082364" Sep 13 00:28:41.794000 containerd[1470]: time="2025-09-13T00:28:41.793425019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:41.794000 containerd[1470]: time="2025-09-13T00:28:41.793648575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:41.794000 containerd[1470]: time="2025-09-13T00:28:41.793681100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:41.794000 containerd[1470]: time="2025-09-13T00:28:41.793788757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:41.851068 systemd[1]: Started cri-containerd-c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e.scope - libcontainer container c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e. Sep 13 00:28:41.939051 containerd[1470]: time="2025-09-13T00:28:41.938972984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2wk28,Uid:a568f78d-444c-478c-ac80-6c89593927f2,Namespace:kube-system,Attempt:1,} returns sandbox id \"c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e\"" Sep 13 00:28:41.947969 containerd[1470]: time="2025-09-13T00:28:41.947907283Z" level=info msg="CreateContainer within sandbox \"c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:28:41.979766 containerd[1470]: time="2025-09-13T00:28:41.979611120Z" level=info msg="CreateContainer within sandbox \"c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a9706dc6445676046aee3cfbfe1b1c80ec6c3ae24d9db5dcee62727478c1fc2\"" Sep 13 00:28:41.982918 containerd[1470]: time="2025-09-13T00:28:41.982719534Z" level=info msg="StartContainer for \"6a9706dc6445676046aee3cfbfe1b1c80ec6c3ae24d9db5dcee62727478c1fc2\"" Sep 13 00:28:42.023968 systemd[1]: Started cri-containerd-6a9706dc6445676046aee3cfbfe1b1c80ec6c3ae24d9db5dcee62727478c1fc2.scope - libcontainer container 6a9706dc6445676046aee3cfbfe1b1c80ec6c3ae24d9db5dcee62727478c1fc2. Sep 13 00:28:42.057411 containerd[1470]: time="2025-09-13T00:28:42.057351465Z" level=info msg="StartContainer for \"6a9706dc6445676046aee3cfbfe1b1c80ec6c3ae24d9db5dcee62727478c1fc2\" returns successfully" Sep 13 00:28:42.218142 systemd-networkd[1372]: caliee6e07a32e3: Gained IPv6LL Sep 13 00:28:42.349151 containerd[1470]: time="2025-09-13T00:28:42.348743698Z" level=info msg="StopPodSandbox for \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\"" Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.419 [INFO][4775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.419 [INFO][4775] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" iface="eth0" netns="/var/run/netns/cni-fb015619-0602-cca0-3120-cb1fe27d0fb6" Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.419 [INFO][4775] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" iface="eth0" netns="/var/run/netns/cni-fb015619-0602-cca0-3120-cb1fe27d0fb6" Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.421 [INFO][4775] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" iface="eth0" netns="/var/run/netns/cni-fb015619-0602-cca0-3120-cb1fe27d0fb6" Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.421 [INFO][4775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.421 [INFO][4775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.453 [INFO][4782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" HandleID="k8s-pod-network.bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.453 [INFO][4782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.453 [INFO][4782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.468 [WARNING][4782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" HandleID="k8s-pod-network.bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.468 [INFO][4782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" HandleID="k8s-pod-network.bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.473 [INFO][4782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:42.480197 containerd[1470]: 2025-09-13 00:28:42.477 [INFO][4775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:42.482859 containerd[1470]: time="2025-09-13T00:28:42.480315663Z" level=info msg="TearDown network for sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\" successfully" Sep 13 00:28:42.482859 containerd[1470]: time="2025-09-13T00:28:42.480347748Z" level=info msg="StopPodSandbox for \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\" returns successfully" Sep 13 00:28:42.482859 containerd[1470]: time="2025-09-13T00:28:42.482325537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9498d6585-s89kj,Uid:3df007d3-21cb-4a21-97dd-dba0e3686044,Namespace:calico-system,Attempt:1,}" Sep 13 00:28:42.536537 systemd[1]: run-netns-cni\x2dfb015619\x2d0602\x2dcca0\x2d3120\x2dcb1fe27d0fb6.mount: Deactivated successfully. 
Sep 13 00:28:42.603045 systemd-networkd[1372]: calibd25054ba33: Gained IPv6LL Sep 13 00:28:42.724051 containerd[1470]: time="2025-09-13T00:28:42.723998544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:42.729212 systemd-networkd[1372]: califd6e3c36c71: Link UP Sep 13 00:28:42.729446 systemd-networkd[1372]: califd6e3c36c71: Gained carrier Sep 13 00:28:42.733026 containerd[1470]: time="2025-09-13T00:28:42.731375459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 13 00:28:42.735706 containerd[1470]: time="2025-09-13T00:28:42.734488787Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.593 [INFO][4791] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0 calico-kube-controllers-9498d6585- calico-system 3df007d3-21cb-4a21-97dd-dba0e3686044 988 0 2025-09-13 00:28:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9498d6585 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-n-9bb66b8eb5 calico-kube-controllers-9498d6585-s89kj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califd6e3c36c71 [] [] }} ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Namespace="calico-system" Pod="calico-kube-controllers-9498d6585-s89kj" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.593 [INFO][4791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Namespace="calico-system" Pod="calico-kube-controllers-9498d6585-s89kj" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.647 [INFO][4806] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" HandleID="k8s-pod-network.d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.648 [INFO][4806] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" HandleID="k8s-pod-network.d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-9bb66b8eb5", "pod":"calico-kube-controllers-9498d6585-s89kj", "timestamp":"2025-09-13 00:28:42.647610622 +0000 UTC"}, Hostname:"ci-4081-3-5-n-9bb66b8eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.648 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.648 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.648 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-9bb66b8eb5' Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.664 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.672 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.680 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.683 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.694 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.694 [INFO][4806] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.699 [INFO][4806] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7 Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.707 [INFO][4806] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.720 [INFO][4806] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.58.70/26] block=192.168.58.64/26 handle="k8s-pod-network.d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.720 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.70/26] handle="k8s-pod-network.d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.720 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:28:42.773102 containerd[1470]: 2025-09-13 00:28:42.720 [INFO][4806] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.70/26] IPv6=[] ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" HandleID="k8s-pod-network.d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:42.773909 containerd[1470]: 2025-09-13 00:28:42.722 [INFO][4791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Namespace="calico-system" Pod="calico-kube-controllers-9498d6585-s89kj" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0", GenerateName:"calico-kube-controllers-9498d6585-", Namespace:"calico-system", SelfLink:"", UID:"3df007d3-21cb-4a21-97dd-dba0e3686044", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9498d6585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"", Pod:"calico-kube-controllers-9498d6585-s89kj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd6e3c36c71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:42.773909 containerd[1470]: 2025-09-13 00:28:42.723 [INFO][4791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.70/32] ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Namespace="calico-system" Pod="calico-kube-controllers-9498d6585-s89kj" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:42.773909 containerd[1470]: 2025-09-13 00:28:42.723 [INFO][4791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd6e3c36c71 ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Namespace="calico-system" Pod="calico-kube-controllers-9498d6585-s89kj" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:42.773909 containerd[1470]: 2025-09-13 00:28:42.726 [INFO][4791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Namespace="calico-system" Pod="calico-kube-controllers-9498d6585-s89kj" 
WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:42.773909 containerd[1470]: 2025-09-13 00:28:42.727 [INFO][4791] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Namespace="calico-system" Pod="calico-kube-controllers-9498d6585-s89kj" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0", GenerateName:"calico-kube-controllers-9498d6585-", Namespace:"calico-system", SelfLink:"", UID:"3df007d3-21cb-4a21-97dd-dba0e3686044", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9498d6585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7", Pod:"calico-kube-controllers-9498d6585-s89kj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd6e3c36c71", MAC:"5e:70:47:c1:ac:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:42.773909 containerd[1470]: 2025-09-13 00:28:42.751 [INFO][4791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7" Namespace="calico-system" Pod="calico-kube-controllers-9498d6585-s89kj" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:42.785612 containerd[1470]: time="2025-09-13T00:28:42.785454088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:42.787861 containerd[1470]: time="2025-09-13T00:28:42.787796695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.642221807s" Sep 13 00:28:42.787861 containerd[1470]: time="2025-09-13T00:28:42.787855664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 00:28:42.795226 systemd-networkd[1372]: 
calie729a0871ed: Gained IPv6LL Sep 13 00:28:42.798087 containerd[1470]: time="2025-09-13T00:28:42.794951776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:28:42.799108 containerd[1470]: time="2025-09-13T00:28:42.796692448Z" level=info msg="CreateContainer within sandbox \"b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:28:42.831860 kubelet[2655]: I0913 00:28:42.831055 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2wk28" podStartSLOduration=40.831033786 podStartE2EDuration="40.831033786s" podCreationTimestamp="2025-09-13 00:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:28:42.828106288 +0000 UTC m=+47.611137556" watchObservedRunningTime="2025-09-13 00:28:42.831033786 +0000 UTC m=+47.614065054" Sep 13 00:28:42.854500 containerd[1470]: time="2025-09-13T00:28:42.854135684Z" level=info msg="CreateContainer within sandbox \"b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3278d42534ca973a7ee3b1353400a8ffce29e10fe075b3c99e911c3e8ba50be4\"" Sep 13 00:28:42.861022 containerd[1470]: time="2025-09-13T00:28:42.860924747Z" level=info msg="StartContainer for \"3278d42534ca973a7ee3b1353400a8ffce29e10fe075b3c99e911c3e8ba50be4\"" Sep 13 00:28:42.876510 containerd[1470]: time="2025-09-13T00:28:42.876066759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:42.876510 containerd[1470]: time="2025-09-13T00:28:42.876144091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:42.876510 containerd[1470]: time="2025-09-13T00:28:42.876160293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:42.876993 containerd[1470]: time="2025-09-13T00:28:42.876563396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:42.954300 systemd[1]: Started cri-containerd-3278d42534ca973a7ee3b1353400a8ffce29e10fe075b3c99e911c3e8ba50be4.scope - libcontainer container 3278d42534ca973a7ee3b1353400a8ffce29e10fe075b3c99e911c3e8ba50be4. Sep 13 00:28:42.964239 systemd[1]: Started cri-containerd-d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7.scope - libcontainer container d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7. 
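[Annotation] For the pod_startup_latency_tracker line above: the logged podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp here, since both pull timestamps are the zero value (0001-01-01), i.e. coredns needed no image pull. A quick check of that arithmetic (not kubelet's code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the tracker line for coredns-7c65d6cfc9-2wk28.
	created, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:28:02Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:28:42.831033786Z")
	fmt.Println(running.Sub(created)) // 40.831033786s == podStartSLOduration
}
```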
Sep 13 00:28:43.020977 containerd[1470]: time="2025-09-13T00:28:43.020934762Z" level=info msg="StartContainer for \"3278d42534ca973a7ee3b1353400a8ffce29e10fe075b3c99e911c3e8ba50be4\" returns successfully" Sep 13 00:28:43.046919 containerd[1470]: time="2025-09-13T00:28:43.046732187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9498d6585-s89kj,Uid:3df007d3-21cb-4a21-97dd-dba0e3686044,Namespace:calico-system,Attempt:1,} returns sandbox id \"d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7\"" Sep 13 00:28:43.309816 systemd-networkd[1372]: cali1671035a74a: Gained IPv6LL Sep 13 00:28:43.352516 containerd[1470]: time="2025-09-13T00:28:43.349973668Z" level=info msg="StopPodSandbox for \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\"" Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.435 [INFO][4907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.436 [INFO][4907] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" iface="eth0" netns="/var/run/netns/cni-cb3e00f3-352e-02e7-c7e6-2390ba54c173" Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.436 [INFO][4907] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" iface="eth0" netns="/var/run/netns/cni-cb3e00f3-352e-02e7-c7e6-2390ba54c173" Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.438 [INFO][4907] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" iface="eth0" netns="/var/run/netns/cni-cb3e00f3-352e-02e7-c7e6-2390ba54c173" Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.438 [INFO][4907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.438 [INFO][4907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.461 [INFO][4914] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" HandleID="k8s-pod-network.bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.461 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.461 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.474 [WARNING][4914] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" HandleID="k8s-pod-network.bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.474 [INFO][4914] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" HandleID="k8s-pod-network.bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.482 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:43.490011 containerd[1470]: 2025-09-13 00:28:43.486 [INFO][4907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:43.491478 containerd[1470]: time="2025-09-13T00:28:43.490295743Z" level=info msg="TearDown network for sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\" successfully" Sep 13 00:28:43.491478 containerd[1470]: time="2025-09-13T00:28:43.490332829Z" level=info msg="StopPodSandbox for \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\" returns successfully" Sep 13 00:28:43.491563 containerd[1470]: time="2025-09-13T00:28:43.491474726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-65946,Uid:10ee540c-0324-4579-8d1e-34d8475f5cac,Namespace:calico-system,Attempt:1,}" Sep 13 00:28:43.531131 systemd[1]: run-containerd-runc-k8s.io-3278d42534ca973a7ee3b1353400a8ffce29e10fe075b3c99e911c3e8ba50be4-runc.a0QWwY.mount: Deactivated successfully. Sep 13 00:28:43.531251 systemd[1]: run-netns-cni\x2dcb3e00f3\x2d352e\x2d02e7\x2dc7e6\x2d2390ba54c173.mount: Deactivated successfully. 
Sep 13 00:28:43.658716 systemd-networkd[1372]: cali9db30a2c8b9: Link UP Sep 13 00:28:43.659113 systemd-networkd[1372]: cali9db30a2c8b9: Gained carrier Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.558 [INFO][4924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0 goldmane-7988f88666- calico-system 10ee540c-0324-4579-8d1e-34d8475f5cac 1007 0 2025-09-13 00:28:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-n-9bb66b8eb5 goldmane-7988f88666-65946 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9db30a2c8b9 [] [] }} ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" Namespace="calico-system" Pod="goldmane-7988f88666-65946" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.558 [INFO][4924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" Namespace="calico-system" Pod="goldmane-7988f88666-65946" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.590 [INFO][4932] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" HandleID="k8s-pod-network.541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.590 [INFO][4932] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" HandleID="k8s-pod-network.541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-9bb66b8eb5", "pod":"goldmane-7988f88666-65946", "timestamp":"2025-09-13 00:28:43.590144047 +0000 UTC"}, Hostname:"ci-4081-3-5-n-9bb66b8eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.590 [INFO][4932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.590 [INFO][4932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.590 [INFO][4932] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-9bb66b8eb5' Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.602 [INFO][4932] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.609 [INFO][4932] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.619 [INFO][4932] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.622 [INFO][4932] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.626 [INFO][4932] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.626 [INFO][4932] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.629 [INFO][4932] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23 Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.636 [INFO][4932] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.650 [INFO][4932] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.58.71/26] block=192.168.58.64/26 handle="k8s-pod-network.541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.650 [INFO][4932] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.71/26] handle="k8s-pod-network.541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.650 [INFO][4932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:28:43.682346 containerd[1470]: 2025-09-13 00:28:43.650 [INFO][4932] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.71/26] IPv6=[] ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" HandleID="k8s-pod-network.541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:43.684583 containerd[1470]: 2025-09-13 00:28:43.653 [INFO][4924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" Namespace="calico-system" Pod="goldmane-7988f88666-65946" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"10ee540c-0324-4579-8d1e-34d8475f5cac", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"", Pod:"goldmane-7988f88666-65946", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9db30a2c8b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:43.684583 containerd[1470]: 2025-09-13 00:28:43.653 [INFO][4924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.71/32] ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" Namespace="calico-system" Pod="goldmane-7988f88666-65946" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:43.684583 containerd[1470]: 2025-09-13 00:28:43.654 [INFO][4924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9db30a2c8b9 ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" Namespace="calico-system" Pod="goldmane-7988f88666-65946" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:43.684583 containerd[1470]: 2025-09-13 00:28:43.659 [INFO][4924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" Namespace="calico-system" Pod="goldmane-7988f88666-65946" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:43.684583 containerd[1470]: 2025-09-13 00:28:43.660 [INFO][4924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" 
Namespace="calico-system" Pod="goldmane-7988f88666-65946" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"10ee540c-0324-4579-8d1e-34d8475f5cac", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23", Pod:"goldmane-7988f88666-65946", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9db30a2c8b9", MAC:"b6:62:22:9f:6e:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:43.684583 containerd[1470]: 2025-09-13 00:28:43.678 [INFO][4924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23" Namespace="calico-system" Pod="goldmane-7988f88666-65946" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:43.710896 containerd[1470]: time="2025-09-13T00:28:43.710471434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:43.710896 containerd[1470]: time="2025-09-13T00:28:43.710771720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:43.710896 containerd[1470]: time="2025-09-13T00:28:43.710789403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:43.712089 containerd[1470]: time="2025-09-13T00:28:43.711848046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:43.746710 systemd[1]: run-containerd-runc-k8s.io-541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23-runc.ocR3iE.mount: Deactivated successfully. Sep 13 00:28:43.757982 systemd[1]: Started cri-containerd-541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23.scope - libcontainer container 541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23. 
Sep 13 00:28:43.807326 containerd[1470]: time="2025-09-13T00:28:43.807244502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-65946,Uid:10ee540c-0324-4579-8d1e-34d8475f5cac,Namespace:calico-system,Attempt:1,} returns sandbox id \"541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23\"" Sep 13 00:28:43.947007 systemd-networkd[1372]: califd6e3c36c71: Gained IPv6LL Sep 13 00:28:44.240114 sshd[4305]: Connection closed by authenticating user root 107.175.39.180 port 33270 [preauth] Sep 13 00:28:44.242043 systemd[1]: sshd@36-195.201.238.219:22-107.175.39.180:33270.service: Deactivated successfully. Sep 13 00:28:44.347904 containerd[1470]: time="2025-09-13T00:28:44.347852634Z" level=info msg="StopPodSandbox for \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\"" Sep 13 00:28:44.412083 systemd[1]: Started sshd@37-195.201.238.219:22-107.175.39.180:33282.service - OpenSSH per-connection server daemon (107.175.39.180:33282). Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.496 [INFO][5004] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.497 [INFO][5004] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" iface="eth0" netns="/var/run/netns/cni-d1ee2c6a-3135-2be1-9150-d151c5a9d0d5" Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.498 [INFO][5004] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" iface="eth0" netns="/var/run/netns/cni-d1ee2c6a-3135-2be1-9150-d151c5a9d0d5" Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.499 [INFO][5004] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" iface="eth0" netns="/var/run/netns/cni-d1ee2c6a-3135-2be1-9150-d151c5a9d0d5" Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.499 [INFO][5004] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.499 [INFO][5004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.530 [INFO][5016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" HandleID="k8s-pod-network.20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.531 [INFO][5016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.531 [INFO][5016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.543 [WARNING][5016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" HandleID="k8s-pod-network.20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.544 [INFO][5016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" HandleID="k8s-pod-network.20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.547 [INFO][5016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:44.553873 containerd[1470]: 2025-09-13 00:28:44.550 [INFO][5004] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:44.557452 containerd[1470]: time="2025-09-13T00:28:44.557367298Z" level=info msg="TearDown network for sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\" successfully" Sep 13 00:28:44.557452 containerd[1470]: time="2025-09-13T00:28:44.557409425Z" level=info msg="StopPodSandbox for \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\" returns successfully" Sep 13 00:28:44.560341 systemd[1]: run-netns-cni\x2dd1ee2c6a\x2d3135\x2d2be1\x2d9150\x2dd151c5a9d0d5.mount: Deactivated successfully. Sep 13 00:28:44.561226 containerd[1470]: time="2025-09-13T00:28:44.561179879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd7f85854-x7mgg,Uid:c520c01d-eab5-4613-a97d-e129d2838442,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:28:44.832072 systemd-networkd[1372]: cali31a4b8ecb55: Link UP Sep 13 00:28:44.833062 systemd-networkd[1372]: cali31a4b8ecb55: Gained carrier Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.674 [INFO][5022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0 calico-apiserver-7bd7f85854- calico-apiserver c520c01d-eab5-4613-a97d-e129d2838442 1017 0 2025-09-13 00:28:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bd7f85854 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-9bb66b8eb5 calico-apiserver-7bd7f85854-x7mgg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali31a4b8ecb55 [] [] }} ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-x7mgg" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.674 [INFO][5022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-x7mgg" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.736 [INFO][5035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" HandleID="k8s-pod-network.6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.736 [INFO][5035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" HandleID="k8s-pod-network.6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-9bb66b8eb5", "pod":"calico-apiserver-7bd7f85854-x7mgg", "timestamp":"2025-09-13 00:28:44.736525374 +0000 UTC"}, Hostname:"ci-4081-3-5-n-9bb66b8eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.737 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.737 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.737 [INFO][5035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-9bb66b8eb5' Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.752 [INFO][5035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.764 [INFO][5035] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.775 [INFO][5035] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.781 [INFO][5035] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.787 [INFO][5035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.788 [INFO][5035] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.793 [INFO][5035] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355 Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.804 [INFO][5035] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.817 [INFO][5035] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.58.72/26] block=192.168.58.64/26 
handle="k8s-pod-network.6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.817 [INFO][5035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.72/26] handle="k8s-pod-network.6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" host="ci-4081-3-5-n-9bb66b8eb5" Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.817 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:44.861817 containerd[1470]: 2025-09-13 00:28:44.817 [INFO][5035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.72/26] IPv6=[] ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" HandleID="k8s-pod-network.6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:44.864643 containerd[1470]: 2025-09-13 00:28:44.821 [INFO][5022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-x7mgg" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0", GenerateName:"calico-apiserver-7bd7f85854-", Namespace:"calico-apiserver", SelfLink:"", UID:"c520c01d-eab5-4613-a97d-e129d2838442", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd7f85854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"", Pod:"calico-apiserver-7bd7f85854-x7mgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31a4b8ecb55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:44.864643 containerd[1470]: 2025-09-13 00:28:44.821 [INFO][5022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.72/32] ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-x7mgg" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:44.864643 containerd[1470]: 2025-09-13 00:28:44.821 [INFO][5022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31a4b8ecb55 ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-x7mgg" 
WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:44.864643 containerd[1470]: 2025-09-13 00:28:44.833 [INFO][5022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-x7mgg" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:44.864643 containerd[1470]: 2025-09-13 00:28:44.834 [INFO][5022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-x7mgg" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0", GenerateName:"calico-apiserver-7bd7f85854-", Namespace:"calico-apiserver", SelfLink:"", UID:"c520c01d-eab5-4613-a97d-e129d2838442", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd7f85854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355", Pod:"calico-apiserver-7bd7f85854-x7mgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31a4b8ecb55", MAC:"fe:f6:68:12:e2:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:44.864643 containerd[1470]: 2025-09-13 00:28:44.848 [INFO][5022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355" Namespace="calico-apiserver" Pod="calico-apiserver-7bd7f85854-x7mgg" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:44.897195 containerd[1470]: time="2025-09-13T00:28:44.896783368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:44.897195 containerd[1470]: time="2025-09-13T00:28:44.896857779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:44.897195 containerd[1470]: time="2025-09-13T00:28:44.896869301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:44.897195 containerd[1470]: time="2025-09-13T00:28:44.897014923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:44.932071 systemd[1]: Started cri-containerd-6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355.scope - libcontainer container 6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355. Sep 13 00:28:44.986114 containerd[1470]: time="2025-09-13T00:28:44.985969006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd7f85854-x7mgg,Uid:c520c01d-eab5-4613-a97d-e129d2838442,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355\"" Sep 13 00:28:45.200898 containerd[1470]: time="2025-09-13T00:28:45.200795224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:45.204325 containerd[1470]: time="2025-09-13T00:28:45.204248544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 13 00:28:45.207234 containerd[1470]: time="2025-09-13T00:28:45.207159543Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:45.213751 containerd[1470]: time="2025-09-13T00:28:45.213326151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:45.215629 containerd[1470]: time="2025-09-13T00:28:45.214625067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.415626978s" Sep 13 00:28:45.215629 containerd[1470]: time="2025-09-13T00:28:45.214705679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:28:45.219146 containerd[1470]: time="2025-09-13T00:28:45.218895470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:28:45.219446 containerd[1470]: time="2025-09-13T00:28:45.219410908Z" level=info msg="CreateContainer within sandbox \"1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:28:45.226095 systemd-networkd[1372]: cali9db30a2c8b9: Gained IPv6LL Sep 13 00:28:45.250619 containerd[1470]: time="2025-09-13T00:28:45.250540075Z" level=info msg="CreateContainer within sandbox \"1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d7acb47cca5c5bd6a7d3371aaaf82dea60fb3a0735e4f8a4b3e4e41d2ec7d40c\"" Sep 13 00:28:45.252142 containerd[1470]: time="2025-09-13T00:28:45.252092709Z" level=info msg="StartContainer for \"d7acb47cca5c5bd6a7d3371aaaf82dea60fb3a0735e4f8a4b3e4e41d2ec7d40c\"" Sep 13 00:28:45.305110 
systemd[1]: Started cri-containerd-d7acb47cca5c5bd6a7d3371aaaf82dea60fb3a0735e4f8a4b3e4e41d2ec7d40c.scope - libcontainer container d7acb47cca5c5bd6a7d3371aaaf82dea60fb3a0735e4f8a4b3e4e41d2ec7d40c. Sep 13 00:28:45.347994 containerd[1470]: time="2025-09-13T00:28:45.347940423Z" level=info msg="StartContainer for \"d7acb47cca5c5bd6a7d3371aaaf82dea60fb3a0735e4f8a4b3e4e41d2ec7d40c\" returns successfully" Sep 13 00:28:46.507363 systemd-networkd[1372]: cali31a4b8ecb55: Gained IPv6LL Sep 13 00:28:46.915639 containerd[1470]: time="2025-09-13T00:28:46.915569404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:46.918327 containerd[1470]: time="2025-09-13T00:28:46.918243842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 13 00:28:46.919728 containerd[1470]: time="2025-09-13T00:28:46.919345686Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:46.922965 containerd[1470]: time="2025-09-13T00:28:46.922888173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:46.924808 containerd[1470]: time="2025-09-13T00:28:46.923649847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.704699128s" Sep 13 00:28:46.924808 containerd[1470]: time="2025-09-13T00:28:46.923746021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 13 00:28:46.926190 containerd[1470]: time="2025-09-13T00:28:46.926130496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:28:46.929788 containerd[1470]: time="2025-09-13T00:28:46.928933033Z" level=info msg="CreateContainer within sandbox \"b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:28:46.963064 containerd[1470]: time="2025-09-13T00:28:46.962973179Z" level=info msg="CreateContainer within sandbox \"b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9563e747955f90e18348743ff95a025f31fa8b6f054dc31bdb06311b71c82349\"" Sep 13 00:28:46.964858 containerd[1470]: time="2025-09-13T00:28:46.964056781Z" level=info msg="StartContainer for \"9563e747955f90e18348743ff95a025f31fa8b6f054dc31bdb06311b71c82349\"" Sep 13 00:28:47.019624 systemd[1]: Started cri-containerd-9563e747955f90e18348743ff95a025f31fa8b6f054dc31bdb06311b71c82349.scope - libcontainer container 9563e747955f90e18348743ff95a025f31fa8b6f054dc31bdb06311b71c82349. 
Sep 13 00:28:47.098853 containerd[1470]: time="2025-09-13T00:28:47.098499789Z" level=info msg="StartContainer for \"9563e747955f90e18348743ff95a025f31fa8b6f054dc31bdb06311b71c82349\" returns successfully" Sep 13 00:28:47.226179 kubelet[2655]: I0913 00:28:47.224712 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bd7f85854-msr2v" podStartSLOduration=29.23746972 podStartE2EDuration="33.224688562s" podCreationTimestamp="2025-09-13 00:28:14 +0000 UTC" firstStartedPulling="2025-09-13 00:28:41.229201255 +0000 UTC m=+46.012232483" lastFinishedPulling="2025-09-13 00:28:45.216419937 +0000 UTC m=+49.999451325" observedRunningTime="2025-09-13 00:28:45.8577776 +0000 UTC m=+50.640808868" watchObservedRunningTime="2025-09-13 00:28:47.224688562 +0000 UTC m=+52.007719870" Sep 13 00:28:47.498779 kubelet[2655]: I0913 00:28:47.498351 2655 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:28:47.498779 kubelet[2655]: I0913 00:28:47.498410 2655 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:28:47.706160 sshd[5012]: Connection closed by authenticating user root 107.175.39.180 port 33282 [preauth] Sep 13 00:28:47.712181 systemd[1]: sshd@37-195.201.238.219:22-107.175.39.180:33282.service: Deactivated successfully. Sep 13 00:28:47.905961 systemd[1]: Started sshd@38-195.201.238.219:22-107.175.39.180:56090.service - OpenSSH per-connection server daemon (107.175.39.180:56090). Sep 13 00:28:49.203718 containerd[1470]: time="2025-09-13T00:28:49.203643079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:49.205178 containerd[1470]: time="2025-09-13T00:28:49.205118131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 13 00:28:49.206126 containerd[1470]: time="2025-09-13T00:28:49.206000819Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:49.211408 containerd[1470]: time="2025-09-13T00:28:49.211313385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:49.213133 containerd[1470]: time="2025-09-13T00:28:49.212531240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.286346097s" Sep 13 00:28:49.213133 containerd[1470]: time="2025-09-13T00:28:49.212580007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 00:28:49.215974 containerd[1470]: time="2025-09-13T00:28:49.215545075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 
00:28:49.247536 containerd[1470]: time="2025-09-13T00:28:49.247477119Z" level=info msg="CreateContainer within sandbox \"d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:28:49.273898 containerd[1470]: time="2025-09-13T00:28:49.273776751Z" level=info msg="CreateContainer within sandbox \"d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1e03261c5dd7dbfdabccb14877b39fbe331ed6f4ae806692b7f5231a1df4a443\"" Sep 13 00:28:49.274919 containerd[1470]: time="2025-09-13T00:28:49.274768734Z" level=info msg="StartContainer for \"1e03261c5dd7dbfdabccb14877b39fbe331ed6f4ae806692b7f5231a1df4a443\"" Sep 13 00:28:49.347125 systemd[1]: Started cri-containerd-1e03261c5dd7dbfdabccb14877b39fbe331ed6f4ae806692b7f5231a1df4a443.scope - libcontainer container 1e03261c5dd7dbfdabccb14877b39fbe331ed6f4ae806692b7f5231a1df4a443. Sep 13 00:28:49.398073 containerd[1470]: time="2025-09-13T00:28:49.397943774Z" level=info msg="StartContainer for \"1e03261c5dd7dbfdabccb14877b39fbe331ed6f4ae806692b7f5231a1df4a443\" returns successfully" Sep 13 00:28:49.884696 kubelet[2655]: I0913 00:28:49.884023 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9498d6585-s89kj" podStartSLOduration=23.717371744 podStartE2EDuration="29.884000658s" podCreationTimestamp="2025-09-13 00:28:20 +0000 UTC" firstStartedPulling="2025-09-13 00:28:43.04863148 +0000 UTC m=+47.831662748" lastFinishedPulling="2025-09-13 00:28:49.215260394 +0000 UTC m=+53.998291662" observedRunningTime="2025-09-13 00:28:49.882967949 +0000 UTC m=+54.665999217" watchObservedRunningTime="2025-09-13 00:28:49.884000658 +0000 UTC m=+54.667031926" Sep 13 00:28:49.888639 kubelet[2655]: I0913 00:28:49.887280 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mvd2b" podStartSLOduration=24.105764796 podStartE2EDuration="29.887253047s" podCreationTimestamp="2025-09-13 00:28:20 +0000 UTC" firstStartedPulling="2025-09-13 00:28:41.143113978 +0000 UTC m=+45.926145246" lastFinishedPulling="2025-09-13 00:28:46.924602269 +0000 UTC m=+51.707633497" observedRunningTime="2025-09-13 00:28:47.874470438 +0000 UTC m=+52.657501706" watchObservedRunningTime="2025-09-13 00:28:49.887253047 +0000 UTC m=+54.670284315" Sep 13 00:28:50.907289 sshd[3071]: Connection reset by 104.248.235.219 port 6103 [preauth] Sep 13 00:28:50.909926 systemd[1]: sshd@30-195.201.238.219:22-104.248.235.219:6103.service: Deactivated successfully. Sep 13 00:28:51.200618 sshd[5190]: Connection closed by authenticating user root 107.175.39.180 port 56090 [preauth] Sep 13 00:28:51.203666 systemd[1]: sshd@38-195.201.238.219:22-107.175.39.180:56090.service: Deactivated successfully. Sep 13 00:28:51.426143 systemd[1]: Started sshd@39-195.201.238.219:22-107.175.39.180:56092.service - OpenSSH per-connection server daemon (107.175.39.180:56092). Sep 13 00:28:53.783307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1394961484.mount: Deactivated successfully. 
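[Annotation] The pod_startup_latency_tracker entries further up separate end-to-end startup from the SLO metric: podStartSLOduration excludes image-pull time, and the numbers line up exactly when the subtraction is done on the monotonic offsets (the "m=+..." values) rather than the wall-clock stamps. A back-of-envelope check for calico-apiserver-7bd7f85854-msr2v:

```go
package main

import "fmt"

func main() {
	// Monotonic offsets from the tracker line for calico-apiserver-7bd7f85854-msr2v.
	const (
		e2e          = 33.224688562 // podStartE2EDuration, seconds
		startedPull  = 46.012232483 // firstStartedPulling, m=+ seconds
		finishedPull = 49.999451325 // lastFinishedPulling, m=+ seconds
	)
	fmt.Printf("%.9f\n", e2e-(finishedPull-startedPull)) // 29.237469720 == podStartSLOduration
}
```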
Sep 13 00:28:54.362634 containerd[1470]: time="2025-09-13T00:28:54.362586585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:54.364310 containerd[1470]: time="2025-09-13T00:28:54.364152647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 13 00:28:54.369740 containerd[1470]: time="2025-09-13T00:28:54.369236838Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:54.371521 containerd[1470]: time="2025-09-13T00:28:54.370995563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:54.372306 containerd[1470]: time="2025-09-13T00:28:54.372087490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 5.156440121s" Sep 13 00:28:54.372306 containerd[1470]: time="2025-09-13T00:28:54.372250389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 13 00:28:54.374250 containerd[1470]: time="2025-09-13T00:28:54.373855375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:28:54.376382 containerd[1470]: time="2025-09-13T00:28:54.376149042Z" level=info msg="CreateContainer within sandbox \"541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:28:54.404847 containerd[1470]: time="2025-09-13T00:28:54.404659158Z" level=info msg="CreateContainer within sandbox \"541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5bbc5e1d0baa4661d670e785fffb989e9db363e341795847b44e8a1f8cd30723\"" Sep 13 00:28:54.409717 containerd[1470]: time="2025-09-13T00:28:54.407943419Z" level=info msg="StartContainer for \"5bbc5e1d0baa4661d670e785fffb989e9db363e341795847b44e8a1f8cd30723\"" Sep 13 00:28:54.451480 systemd[1]: run-containerd-runc-k8s.io-5bbc5e1d0baa4661d670e785fffb989e9db363e341795847b44e8a1f8cd30723-runc.LTYqVu.mount: Deactivated successfully. Sep 13 00:28:54.462951 systemd[1]: Started cri-containerd-5bbc5e1d0baa4661d670e785fffb989e9db363e341795847b44e8a1f8cd30723.scope - libcontainer container 5bbc5e1d0baa4661d670e785fffb989e9db363e341795847b44e8a1f8cd30723. Sep 13 00:28:54.509784 containerd[1470]: time="2025-09-13T00:28:54.509629085Z" level=info msg="StartContainer for \"5bbc5e1d0baa4661d670e785fffb989e9db363e341795847b44e8a1f8cd30723\" returns successfully" Sep 13 00:28:54.748708 sshd[5270]: Connection closed by authenticating user root 107.175.39.180 port 56092 [preauth] Sep 13 00:28:54.754624 systemd[1]: sshd@39-195.201.238.219:22-107.175.39.180:56092.service: Deactivated successfully. 
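[Annotation] The \x2d runs in unit names above (run-netns-cni\x2dcb3e00f3..., var-lib-containerd-tmpmounts-containerd\x2dmount1394961484.mount) are systemd's unit-name escaping, not log corruption: in a mount or path-derived unit name, '-' encodes a '/' separator, so literal dashes in the path must be written as \x2d. A toy version of just that one rule (systemd-escape also handles other special characters):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeUnitPart applies the one rule visible in the log: a literal '-' in a
// path component becomes \x2d so it can't be confused with a '/' separator.
func escapeUnitPart(s string) string {
	return strings.ReplaceAll(s, "-", `\x2d`)
}

func main() {
	// Reconstructs the netns mount unit seen above for /run/netns/cni-cb3e....
	fmt.Println("run-netns-" + escapeUnitPart("cni-cb3e00f3-352e-02e7-c7e6-2390ba54c173") + ".mount")
}
```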
Sep 13 00:28:54.799604 containerd[1470]: time="2025-09-13T00:28:54.799487195Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:54.801128 containerd[1470]: time="2025-09-13T00:28:54.800989449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:28:54.805399 containerd[1470]: time="2025-09-13T00:28:54.805345596Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 431.438855ms" Sep 13 00:28:54.805908 containerd[1470]: time="2025-09-13T00:28:54.805597505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:28:54.811263 containerd[1470]: time="2025-09-13T00:28:54.810297412Z" level=info msg="CreateContainer within sandbox \"6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:28:54.837855 containerd[1470]: time="2025-09-13T00:28:54.837480653Z" level=info msg="CreateContainer within sandbox \"6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e8b4c8ae1db1e176e2990d7627ba8bf3d9cfda1e3fd5147df8e4de6f7e0295d0\"" Sep 13 00:28:54.839461 containerd[1470]: time="2025-09-13T00:28:54.839037914Z" level=info msg="StartContainer for \"e8b4c8ae1db1e176e2990d7627ba8bf3d9cfda1e3fd5147df8e4de6f7e0295d0\"" Sep 13 00:28:54.899880 systemd[1]: Started cri-containerd-e8b4c8ae1db1e176e2990d7627ba8bf3d9cfda1e3fd5147df8e4de6f7e0295d0.scope - libcontainer container e8b4c8ae1db1e176e2990d7627ba8bf3d9cfda1e3fd5147df8e4de6f7e0295d0. Sep 13 00:28:54.911537 systemd[1]: Started sshd@40-195.201.238.219:22-107.175.39.180:56106.service - OpenSSH per-connection server daemon (107.175.39.180:56106). Sep 13 00:28:54.936490 kubelet[2655]: I0913 00:28:54.936396 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-65946" podStartSLOduration=25.371789422 podStartE2EDuration="35.936375714s" podCreationTimestamp="2025-09-13 00:28:19 +0000 UTC" firstStartedPulling="2025-09-13 00:28:43.808973489 +0000 UTC m=+48.592004757" lastFinishedPulling="2025-09-13 00:28:54.373559781 +0000 UTC m=+59.156591049" observedRunningTime="2025-09-13 00:28:54.931259039 +0000 UTC m=+59.714290387" watchObservedRunningTime="2025-09-13 00:28:54.936375714 +0000 UTC m=+59.719406982" Sep 13 00:28:55.054718 containerd[1470]: time="2025-09-13T00:28:55.054422132Z" level=info msg="StartContainer for \"e8b4c8ae1db1e176e2990d7627ba8bf3d9cfda1e3fd5147df8e4de6f7e0295d0\" returns successfully" Sep 13 00:28:55.389696 containerd[1470]: time="2025-09-13T00:28:55.387767369Z" level=info msg="StopPodSandbox for \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\"" Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.480 [WARNING][5406] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d9a34000-b87c-4a54-a63c-fdd33a922040", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae", Pod:"coredns-7c65d6cfc9-g9w6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie729a0871ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.483 [INFO][5406] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.483 [INFO][5406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" iface="eth0" netns="" Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.483 [INFO][5406] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.484 [INFO][5406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.536 [INFO][5414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" HandleID="k8s-pod-network.41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.536 [INFO][5414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.536 [INFO][5414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.550 [WARNING][5414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" HandleID="k8s-pod-network.41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.551 [INFO][5414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" HandleID="k8s-pod-network.41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.554 [INFO][5414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:55.559023 containerd[1470]: 2025-09-13 00:28:55.557 [INFO][5406] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:55.561198 containerd[1470]: time="2025-09-13T00:28:55.559055944Z" level=info msg="TearDown network for sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\" successfully" Sep 13 00:28:55.561198 containerd[1470]: time="2025-09-13T00:28:55.559118659Z" level=info msg="StopPodSandbox for \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\" returns successfully" Sep 13 00:28:55.562132 containerd[1470]: time="2025-09-13T00:28:55.561418647Z" level=info msg="RemovePodSandbox for \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\"" Sep 13 00:28:55.562132 containerd[1470]: time="2025-09-13T00:28:55.561471483Z" level=info msg="Forcibly stopping sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\"" Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.636 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d9a34000-b87c-4a54-a63c-fdd33a922040", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"821ced5bb4dfc154cdd4a95411af52071021acf8f1fdae6c951801f57507edae", Pod:"coredns-7c65d6cfc9-g9w6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie729a0871ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.636 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.637 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" iface="eth0" netns="" Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.637 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.637 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.686 [INFO][5437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" HandleID="k8s-pod-network.41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.686 [INFO][5437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.687 [INFO][5437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.703 [WARNING][5437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" HandleID="k8s-pod-network.41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.704 [INFO][5437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" HandleID="k8s-pod-network.41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--g9w6r-eth0" Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.709 [INFO][5437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:55.722139 containerd[1470]: 2025-09-13 00:28:55.711 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d" Sep 13 00:28:55.722139 containerd[1470]: time="2025-09-13T00:28:55.721225639Z" level=info msg="TearDown network for sandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\" successfully" Sep 13 00:28:55.735351 containerd[1470]: time="2025-09-13T00:28:55.735073765Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:28:55.735351 containerd[1470]: time="2025-09-13T00:28:55.735174797Z" level=info msg="RemovePodSandbox \"41a2155beee76d5f8b9a2210fa40d9e971d9071a1fb619d181c23352c53fb00d\" returns successfully" Sep 13 00:28:55.736738 containerd[1470]: time="2025-09-13T00:28:55.736656527Z" level=info msg="StopPodSandbox for \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\"" Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.802 [WARNING][5452] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0", GenerateName:"calico-apiserver-7bd7f85854-", Namespace:"calico-apiserver", SelfLink:"", UID:"c520c01d-eab5-4613-a97d-e129d2838442", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd7f85854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355", Pod:"calico-apiserver-7bd7f85854-x7mgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31a4b8ecb55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.803 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.803 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" iface="eth0" netns="" Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.803 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.803 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.838 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" HandleID="k8s-pod-network.20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.838 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.838 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.852 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" HandleID="k8s-pod-network.20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.852 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" HandleID="k8s-pod-network.20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.855 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:55.859790 containerd[1470]: 2025-09-13 00:28:55.857 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:55.859790 containerd[1470]: time="2025-09-13T00:28:55.859666745Z" level=info msg="TearDown network for sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\" successfully" Sep 13 00:28:55.861119 containerd[1470]: time="2025-09-13T00:28:55.860789941Z" level=info msg="StopPodSandbox for \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\" returns successfully" Sep 13 00:28:55.862552 containerd[1470]: time="2025-09-13T00:28:55.862502213Z" level=info msg="RemovePodSandbox for \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\"" Sep 13 00:28:55.862552 containerd[1470]: time="2025-09-13T00:28:55.862554489Z" level=info msg="Forcibly stopping sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\"" Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:55.933 [WARNING][5473] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0", GenerateName:"calico-apiserver-7bd7f85854-", Namespace:"calico-apiserver", SelfLink:"", UID:"c520c01d-eab5-4613-a97d-e129d2838442", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd7f85854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"6190fa07fac28aae73e37924d4c488bfe2ad7c4a876a9994234a66a1f8787355", Pod:"calico-apiserver-7bd7f85854-x7mgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31a4b8ecb55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:55.933 [INFO][5473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:55.933 [INFO][5473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" iface="eth0" netns="" Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:55.933 [INFO][5473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:55.933 [INFO][5473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:56.017 [INFO][5481] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" HandleID="k8s-pod-network.20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:56.017 [INFO][5481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:56.017 [INFO][5481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:56.062 [WARNING][5481] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" HandleID="k8s-pod-network.20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:56.062 [INFO][5481] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" HandleID="k8s-pod-network.20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--x7mgg-eth0" Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:56.067 [INFO][5481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:56.073549 containerd[1470]: 2025-09-13 00:28:56.070 [INFO][5473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a" Sep 13 00:28:56.075985 containerd[1470]: time="2025-09-13T00:28:56.075823341Z" level=info msg="TearDown network for sandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\" successfully" Sep 13 00:28:56.084579 containerd[1470]: time="2025-09-13T00:28:56.084360991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:28:56.084579 containerd[1470]: time="2025-09-13T00:28:56.084460664Z" level=info msg="RemovePodSandbox \"20163e91ebd93eb0758051284174c774d676c317a4335f064c189721a4d7b94a\" returns successfully" Sep 13 00:28:56.086389 containerd[1470]: time="2025-09-13T00:28:56.086089191Z" level=info msg="StopPodSandbox for \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\"" Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.159 [WARNING][5514] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"10ee540c-0324-4579-8d1e-34d8475f5cac", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23", Pod:"goldmane-7988f88666-65946", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9db30a2c8b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.160 [INFO][5514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.160 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" iface="eth0" netns="" Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.161 [INFO][5514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.161 [INFO][5514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.211 [INFO][5527] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" HandleID="k8s-pod-network.bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.212 [INFO][5527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.212 [INFO][5527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.225 [WARNING][5527] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" HandleID="k8s-pod-network.bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.225 [INFO][5527] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" HandleID="k8s-pod-network.bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.228 [INFO][5527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:56.233656 containerd[1470]: 2025-09-13 00:28:56.231 [INFO][5514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:56.236121 containerd[1470]: time="2025-09-13T00:28:56.233937045Z" level=info msg="TearDown network for sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\" successfully" Sep 13 00:28:56.236121 containerd[1470]: time="2025-09-13T00:28:56.234895819Z" level=info msg="StopPodSandbox for \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\" returns successfully" Sep 13 00:28:56.237156 containerd[1470]: time="2025-09-13T00:28:56.236798047Z" level=info msg="RemovePodSandbox for \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\"" Sep 13 00:28:56.237156 containerd[1470]: time="2025-09-13T00:28:56.236905440Z" level=info msg="Forcibly stopping sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\"" Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.313 [WARNING][5541] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"10ee540c-0324-4579-8d1e-34d8475f5cac", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"541e63001e404b0ac400582e9dccfe410059c2fd651fd2b6be4cbf5414422c23", Pod:"goldmane-7988f88666-65946", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9db30a2c8b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.314 [INFO][5541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.314 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" iface="eth0" netns="" Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.314 [INFO][5541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.314 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.349 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" HandleID="k8s-pod-network.bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.349 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.349 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.363 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" HandleID="k8s-pod-network.bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.363 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" HandleID="k8s-pod-network.bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-goldmane--7988f88666--65946-eth0" Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.365 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:56.370130 containerd[1470]: 2025-09-13 00:28:56.367 [INFO][5541] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4" Sep 13 00:28:56.371336 containerd[1470]: time="2025-09-13T00:28:56.370477520Z" level=info msg="TearDown network for sandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\" successfully" Sep 13 00:28:56.382353 containerd[1470]: time="2025-09-13T00:28:56.381982445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:28:56.382353 containerd[1470]: time="2025-09-13T00:28:56.382125275Z" level=info msg="RemovePodSandbox \"bfec05011d285c8d163b8986e2b3ac2d98e28cdddb9690eb71c97b4b199b67e4\" returns successfully" Sep 13 00:28:56.383245 containerd[1470]: time="2025-09-13T00:28:56.382666797Z" level=info msg="StopPodSandbox for \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\"" Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.452 [WARNING][5562] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0", GenerateName:"calico-apiserver-7bd7f85854-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef73e584-e419-4d03-b7f7-96ac3d16f498", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd7f85854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597", Pod:"calico-apiserver-7bd7f85854-msr2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd25054ba33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.454 [INFO][5562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.454 [INFO][5562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" iface="eth0" netns="" Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.454 [INFO][5562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.454 [INFO][5562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.498 [INFO][5569] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" HandleID="k8s-pod-network.e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.499 [INFO][5569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.499 [INFO][5569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.515 [WARNING][5569] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" HandleID="k8s-pod-network.e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.515 [INFO][5569] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" HandleID="k8s-pod-network.e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.522 [INFO][5569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:56.531459 containerd[1470]: 2025-09-13 00:28:56.525 [INFO][5562] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:56.531459 containerd[1470]: time="2025-09-13T00:28:56.531303196Z" level=info msg="TearDown network for sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\" successfully" Sep 13 00:28:56.531459 containerd[1470]: time="2025-09-13T00:28:56.531331314Z" level=info msg="StopPodSandbox for \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\" returns successfully" Sep 13 00:28:56.535777 containerd[1470]: time="2025-09-13T00:28:56.534194996Z" level=info msg="RemovePodSandbox for \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\"" Sep 13 00:28:56.535777 containerd[1470]: time="2025-09-13T00:28:56.534241313Z" level=info msg="Forcibly stopping sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\"" Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.603 [WARNING][5584] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0", GenerateName:"calico-apiserver-7bd7f85854-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef73e584-e419-4d03-b7f7-96ac3d16f498", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd7f85854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"1b033bed7d861ea8f1d7158053d4f7db1c6595d81ae2ad14eea2edabb65b6597", Pod:"calico-apiserver-7bd7f85854-msr2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd25054ba33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.603 [INFO][5584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.603 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" iface="eth0" netns="" Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.603 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.603 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.649 [INFO][5591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" HandleID="k8s-pod-network.e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.649 [INFO][5591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.649 [INFO][5591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.665 [WARNING][5591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" HandleID="k8s-pod-network.e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.665 [INFO][5591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" HandleID="k8s-pod-network.e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--apiserver--7bd7f85854--msr2v-eth0" Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.668 [INFO][5591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:56.675308 containerd[1470]: 2025-09-13 00:28:56.672 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c" Sep 13 00:28:56.676781 containerd[1470]: time="2025-09-13T00:28:56.676189375Z" level=info msg="TearDown network for sandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\" successfully" Sep 13 00:28:56.681734 containerd[1470]: time="2025-09-13T00:28:56.681467130Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:28:56.681734 containerd[1470]: time="2025-09-13T00:28:56.681566163Z" level=info msg="RemovePodSandbox \"e2b575805a24e54b8c52c987b5726b427fb132bf30a2cce2f922cf01f05f152c\" returns successfully" Sep 13 00:28:56.683338 containerd[1470]: time="2025-09-13T00:28:56.682830195Z" level=info msg="StopPodSandbox for \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\"" Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.740 [WARNING][5605] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0", GenerateName:"calico-kube-controllers-9498d6585-", Namespace:"calico-system", SelfLink:"", UID:"3df007d3-21cb-4a21-97dd-dba0e3686044", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9498d6585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7", Pod:"calico-kube-controllers-9498d6585-s89kj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd6e3c36c71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.741 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.741 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" iface="eth0" netns="" Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.741 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.741 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.782 [INFO][5612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" HandleID="k8s-pod-network.bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.782 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.782 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.800 [WARNING][5612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" HandleID="k8s-pod-network.bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.800 [INFO][5612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" HandleID="k8s-pod-network.bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.804 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:56.809727 containerd[1470]: 2025-09-13 00:28:56.806 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:56.810942 containerd[1470]: time="2025-09-13T00:28:56.810414331Z" level=info msg="TearDown network for sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\" successfully" Sep 13 00:28:56.810942 containerd[1470]: time="2025-09-13T00:28:56.810457768Z" level=info msg="StopPodSandbox for \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\" returns successfully" Sep 13 00:28:56.812130 containerd[1470]: time="2025-09-13T00:28:56.811791155Z" level=info msg="RemovePodSandbox for \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\"" Sep 13 00:28:56.812130 containerd[1470]: time="2025-09-13T00:28:56.811884709Z" level=info msg="Forcibly stopping sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\"" Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.869 [WARNING][5626] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0", GenerateName:"calico-kube-controllers-9498d6585-", Namespace:"calico-system", SelfLink:"", UID:"3df007d3-21cb-4a21-97dd-dba0e3686044", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9498d6585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"d805da64f9abf156e0ad6b1f587cede61d80eb7dec0de510131bb5ec4fa457d7", Pod:"calico-kube-controllers-9498d6585-s89kj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd6e3c36c71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.870 [INFO][5626] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.870 [INFO][5626] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" iface="eth0" netns="" Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.870 [INFO][5626] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.870 [INFO][5626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.915 [INFO][5634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" HandleID="k8s-pod-network.bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.915 [INFO][5634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.915 [INFO][5634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.928 [WARNING][5634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" HandleID="k8s-pod-network.bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.929 [INFO][5634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" HandleID="k8s-pod-network.bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-calico--kube--controllers--9498d6585--s89kj-eth0" Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.932 [INFO][5634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:56.937404 containerd[1470]: 2025-09-13 00:28:56.934 [INFO][5626] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649" Sep 13 00:28:56.940096 containerd[1470]: time="2025-09-13T00:28:56.937234839Z" level=info msg="TearDown network for sandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\" successfully" Sep 13 00:28:56.944397 containerd[1470]: time="2025-09-13T00:28:56.944142801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:28:56.944397 containerd[1470]: time="2025-09-13T00:28:56.944272112Z" level=info msg="RemovePodSandbox \"bfe0e467693aedf3fcc06620f51f23aaea1907aa7e75da9b6f57d7de07388649\" returns successfully" Sep 13 00:28:56.945656 containerd[1470]: time="2025-09-13T00:28:56.945332838Z" level=info msg="StopPodSandbox for \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\"" Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.003 [WARNING][5648] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a568f78d-444c-478c-ac80-6c89593927f2", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e", Pod:"coredns-7c65d6cfc9-2wk28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1671035a74a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.005 [INFO][5648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.005 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" iface="eth0" netns="" Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.006 [INFO][5648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.006 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.051 [INFO][5655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" HandleID="k8s-pod-network.15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0" Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.051 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.051 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.065 [WARNING][5655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" HandleID="k8s-pod-network.15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0"
Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.065 [INFO][5655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" HandleID="k8s-pod-network.15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0"
Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.068 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:28:57.073527 containerd[1470]: 2025-09-13 00:28:57.071 [INFO][5648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f"
Sep 13 00:28:57.073527 containerd[1470]: time="2025-09-13T00:28:57.073394846Z" level=info msg="TearDown network for sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\" successfully"
Sep 13 00:28:57.073527 containerd[1470]: time="2025-09-13T00:28:57.073422365Z" level=info msg="StopPodSandbox for \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\" returns successfully"
Sep 13 00:28:57.078263 containerd[1470]: time="2025-09-13T00:28:57.076875304Z" level=info msg="RemovePodSandbox for \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\""
Sep 13 00:28:57.078263 containerd[1470]: time="2025-09-13T00:28:57.077709251Z" level=info msg="Forcibly stopping sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\""
Sep 13 00:28:57.079707 kubelet[2655]: I0913 00:28:57.078467 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bd7f85854-x7mgg" podStartSLOduration=33.262997051 podStartE2EDuration="43.078446804s" podCreationTimestamp="2025-09-13 00:28:14 +0000 UTC" firstStartedPulling="2025-09-13 00:28:44.990966848 +0000 UTC m=+49.773998076" lastFinishedPulling="2025-09-13 00:28:54.806416561 +0000 UTC m=+59.589447829" observedRunningTime="2025-09-13 00:28:56.02452489 +0000 UTC m=+60.807556278" watchObservedRunningTime="2025-09-13 00:28:57.078446804 +0000 UTC m=+61.861478152"
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.148 [WARNING][5670] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a568f78d-444c-478c-ac80-6c89593927f2", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"c9cef79f06b38a78915b3fcbae92be55884369cf73de4124ccd844abd631674e", Pod:"coredns-7c65d6cfc9-2wk28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1671035a74a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.148 [INFO][5670] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f"
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.148 [INFO][5670] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" iface="eth0" netns=""
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.148 [INFO][5670] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f"
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.148 [INFO][5670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f"
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.175 [INFO][5679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" HandleID="k8s-pod-network.15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0"
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.175 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.176 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.187 [WARNING][5679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" HandleID="k8s-pod-network.15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0"
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.187 [INFO][5679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" HandleID="k8s-pod-network.15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-coredns--7c65d6cfc9--2wk28-eth0"
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.189 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:28:57.195607 containerd[1470]: 2025-09-13 00:28:57.191 [INFO][5670] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f"
Sep 13 00:28:57.195607 containerd[1470]: time="2025-09-13T00:28:57.193994184Z" level=info msg="TearDown network for sandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\" successfully"
Sep 13 00:28:57.198591 containerd[1470]: time="2025-09-13T00:28:57.198499737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:28:57.198990 containerd[1470]: time="2025-09-13T00:28:57.198959587Z" level=info msg="RemovePodSandbox \"15907000aac5543b6a0de40f4634e443e82ffbf83aa2a12a21bfae5c5275519f\" returns successfully"
Sep 13 00:28:57.199862 containerd[1470]: time="2025-09-13T00:28:57.199602346Z" level=info msg="StopPodSandbox for \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\""
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.246 [WARNING][5694] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0"
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.246 [INFO][5694] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb"
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.246 [INFO][5694] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" iface="eth0" netns=""
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.246 [INFO][5694] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb"
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.246 [INFO][5694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb"
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.275 [INFO][5701] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" HandleID="k8s-pod-network.346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0"
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.275 [INFO][5701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.275 [INFO][5701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.287 [WARNING][5701] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" HandleID="k8s-pod-network.346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0"
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.287 [INFO][5701] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" HandleID="k8s-pod-network.346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0"
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.291 [INFO][5701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:28:57.295234 containerd[1470]: 2025-09-13 00:28:57.293 [INFO][5694] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb"
Sep 13 00:28:57.296234 containerd[1470]: time="2025-09-13T00:28:57.295809962Z" level=info msg="TearDown network for sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\" successfully"
Sep 13 00:28:57.296234 containerd[1470]: time="2025-09-13T00:28:57.295850839Z" level=info msg="StopPodSandbox for \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\" returns successfully"
Sep 13 00:28:57.296958 containerd[1470]: time="2025-09-13T00:28:57.296588512Z" level=info msg="RemovePodSandbox for \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\""
Sep 13 00:28:57.296958 containerd[1470]: time="2025-09-13T00:28:57.296626590Z" level=info msg="Forcibly stopping sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\""
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.351 [WARNING][5715] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" WorkloadEndpoint="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0"
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.351 [INFO][5715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb"
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.351 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" iface="eth0" netns=""
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.351 [INFO][5715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb"
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.351 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb"
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.375 [INFO][5722] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" HandleID="k8s-pod-network.346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0"
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.376 [INFO][5722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.376 [INFO][5722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.390 [WARNING][5722] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" HandleID="k8s-pod-network.346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0"
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.390 [INFO][5722] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" HandleID="k8s-pod-network.346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-whisker--65757d5db6--hv4sh-eth0"
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.392 [INFO][5722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:28:57.396256 containerd[1470]: 2025-09-13 00:28:57.394 [INFO][5715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb"
Sep 13 00:28:57.396256 containerd[1470]: time="2025-09-13T00:28:57.396136835Z" level=info msg="TearDown network for sandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\" successfully"
Sep 13 00:28:57.402508 containerd[1470]: time="2025-09-13T00:28:57.401977382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:28:57.402508 containerd[1470]: time="2025-09-13T00:28:57.402116373Z" level=info msg="RemovePodSandbox \"346b79559cabab9c6d1eaed27091788482a3c49218784ebd63c2d9193f286bcb\" returns successfully"
Sep 13 00:28:57.402751 containerd[1470]: time="2025-09-13T00:28:57.402688816Z" level=info msg="StopPodSandbox for \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\""
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.462 [WARNING][5736] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23", Pod:"csi-node-driver-mvd2b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee6e07a32e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.462 [INFO][5736] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2"
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.462 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" iface="eth0" netns=""
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.462 [INFO][5736] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2"
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.463 [INFO][5736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2"
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.487 [INFO][5744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" HandleID="k8s-pod-network.8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0"
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.488 [INFO][5744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.488 [INFO][5744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.519 [WARNING][5744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" HandleID="k8s-pod-network.8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0"
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.519 [INFO][5744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" HandleID="k8s-pod-network.8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0"
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.523 [INFO][5744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:28:57.528627 containerd[1470]: 2025-09-13 00:28:57.525 [INFO][5736] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2"
Sep 13 00:28:57.528627 containerd[1470]: time="2025-09-13T00:28:57.527161667Z" level=info msg="TearDown network for sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\" successfully"
Sep 13 00:28:57.528627 containerd[1470]: time="2025-09-13T00:28:57.527187625Z" level=info msg="StopPodSandbox for \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\" returns successfully"
Sep 13 00:28:57.528627 containerd[1470]: time="2025-09-13T00:28:57.528024852Z" level=info msg="RemovePodSandbox for \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\""
Sep 13 00:28:57.528627 containerd[1470]: time="2025-09-13T00:28:57.528068209Z" level=info msg="Forcibly stopping sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\""
Sep 13 00:28:57.577187 systemd[1]: run-containerd-runc-k8s.io-5bbc5e1d0baa4661d670e785fffb989e9db363e341795847b44e8a1f8cd30723-runc.1fkfAc.mount: Deactivated successfully.
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.599 [WARNING][5775] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b312368b-a0fb-41a9-8fcf-b787f4bcfe2e", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-9bb66b8eb5", ContainerID:"b2d9165c93656353ed1adc12c7c72b55074c7c6c3183dd5a56818e444e802f23", Pod:"csi-node-driver-mvd2b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee6e07a32e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.600 [INFO][5775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2"
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.600 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" iface="eth0" netns=""
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.600 [INFO][5775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2"
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.600 [INFO][5775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2"
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.629 [INFO][5801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" HandleID="k8s-pod-network.8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0"
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.629 [INFO][5801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.629 [INFO][5801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.640 [WARNING][5801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" HandleID="k8s-pod-network.8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0"
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.640 [INFO][5801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" HandleID="k8s-pod-network.8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2" Workload="ci--4081--3--5--n--9bb66b8eb5-k8s-csi--node--driver--mvd2b-eth0"
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.644 [INFO][5801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:28:57.649370 containerd[1470]: 2025-09-13 00:28:57.647 [INFO][5775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2"
Sep 13 00:28:57.650180 containerd[1470]: time="2025-09-13T00:28:57.649462936Z" level=info msg="TearDown network for sandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\" successfully"
Sep 13 00:28:57.660223 containerd[1470]: time="2025-09-13T00:28:57.659484456Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:28:57.660223 containerd[1470]: time="2025-09-13T00:28:57.659852913Z" level=info msg="RemovePodSandbox \"8714dcaa9bde25e234a74c8e87beff8de80c7e2537f9d56d142e732b4de345f2\" returns successfully"
Sep 13 00:28:59.348546 sshd[5342]: Connection closed by authenticating user root 107.175.39.180 port 56106 [preauth]
Sep 13 00:28:59.352286 systemd[1]: sshd@40-195.201.238.219:22-107.175.39.180:56106.service: Deactivated successfully.
Sep 13 00:28:59.500418 systemd[1]: Started sshd@41-195.201.238.219:22-107.175.39.180:52496.service - OpenSSH per-connection server daemon (107.175.39.180:52496).
Sep 13 00:29:03.667315 sshd[5814]: Connection closed by authenticating user root 107.175.39.180 port 52496 [preauth]
Sep 13 00:29:03.669743 systemd[1]: sshd@41-195.201.238.219:22-107.175.39.180:52496.service: Deactivated successfully.
Sep 13 00:29:03.818057 systemd[1]: Started sshd@42-195.201.238.219:22-107.175.39.180:52500.service - OpenSSH per-connection server daemon (107.175.39.180:52500).
Sep 13 00:29:06.037294 sshd[5848]: Connection closed by authenticating user root 107.175.39.180 port 52500 [preauth]
Sep 13 00:29:06.043234 systemd[1]: sshd@42-195.201.238.219:22-107.175.39.180:52500.service: Deactivated successfully.
Sep 13 00:29:06.406833 systemd[1]: Started sshd@43-195.201.238.219:22-107.175.39.180:56364.service - OpenSSH per-connection server daemon (107.175.39.180:56364).
Sep 13 00:29:10.832720 sshd[5878]: Connection closed by authenticating user root 107.175.39.180 port 56364 [preauth]
Sep 13 00:29:10.839338 systemd[1]: sshd@43-195.201.238.219:22-107.175.39.180:56364.service: Deactivated successfully.
Sep 13 00:29:10.980803 systemd[1]: Started sshd@44-195.201.238.219:22-107.175.39.180:56366.service - OpenSSH per-connection server daemon (107.175.39.180:56366).
Sep 13 00:29:14.547223 sshd[5883]: Connection closed by authenticating user root 107.175.39.180 port 56366 [preauth]
Sep 13 00:29:14.552794 systemd[1]: sshd@44-195.201.238.219:22-107.175.39.180:56366.service: Deactivated successfully.
Sep 13 00:29:14.748139 systemd[1]: Started sshd@45-195.201.238.219:22-107.175.39.180:56370.service - OpenSSH per-connection server daemon (107.175.39.180:56370).
Sep 13 00:29:18.232226 sshd[5890]: Connection closed by authenticating user root 107.175.39.180 port 56370 [preauth]
Sep 13 00:29:18.235787 systemd[1]: sshd@45-195.201.238.219:22-107.175.39.180:56370.service: Deactivated successfully.
Sep 13 00:29:18.436818 systemd[1]: Started sshd@46-195.201.238.219:22-107.175.39.180:54944.service - OpenSSH per-connection server daemon (107.175.39.180:54944).
Sep 13 00:29:23.426704 sshd[5901]: Connection closed by authenticating user root 107.175.39.180 port 54944 [preauth]
Sep 13 00:29:23.430078 systemd[1]: sshd@46-195.201.238.219:22-107.175.39.180:54944.service: Deactivated successfully.
Sep 13 00:29:23.598842 systemd[1]: Started sshd@47-195.201.238.219:22-107.175.39.180:54946.service - OpenSSH per-connection server daemon (107.175.39.180:54946).
Sep 13 00:29:25.581187 sshd[5906]: Connection closed by authenticating user root 107.175.39.180 port 54946 [preauth]
Sep 13 00:29:25.586846 systemd[1]: sshd@47-195.201.238.219:22-107.175.39.180:54946.service: Deactivated successfully.
Sep 13 00:29:25.782873 systemd[1]: Started sshd@48-195.201.238.219:22-107.175.39.180:47540.service - OpenSSH per-connection server daemon (107.175.39.180:47540).
Sep 13 00:29:27.570020 systemd[1]: run-containerd-runc-k8s.io-5bbc5e1d0baa4661d670e785fffb989e9db363e341795847b44e8a1f8cd30723-runc.3rfqvf.mount: Deactivated successfully.
Sep 13 00:29:29.587068 sshd[5913]: Connection closed by authenticating user root 107.175.39.180 port 47540 [preauth]
Sep 13 00:29:29.592909 systemd[1]: sshd@48-195.201.238.219:22-107.175.39.180:47540.service: Deactivated successfully.
Sep 13 00:29:29.742923 systemd[1]: Started sshd@49-195.201.238.219:22-107.175.39.180:47548.service - OpenSSH per-connection server daemon (107.175.39.180:47548).
Sep 13 00:29:31.982318 sshd[5959]: Connection closed by authenticating user root 107.175.39.180 port 47548 [preauth]
Sep 13 00:29:31.987593 systemd[1]: sshd@49-195.201.238.219:22-107.175.39.180:47548.service: Deactivated successfully.
Sep 13 00:29:32.148865 systemd[1]: Started sshd@50-195.201.238.219:22-107.175.39.180:47552.service - OpenSSH per-connection server daemon (107.175.39.180:47552).
Sep 13 00:29:32.236586 systemd[1]: run-containerd-runc-k8s.io-8c4218e75fe9e7782c02b43dce9de65062a5fd4e594a37961613bff0836b7eff-runc.DvjkUK.mount: Deactivated successfully.
Sep 13 00:29:34.251154 sshd[5964]: Connection closed by authenticating user root 107.175.39.180 port 47552 [preauth]
Sep 13 00:29:34.252780 systemd[1]: sshd@50-195.201.238.219:22-107.175.39.180:47552.service: Deactivated successfully.
Sep 13 00:29:34.418093 systemd[1]: Started sshd@51-195.201.238.219:22-107.175.39.180:47568.service - OpenSSH per-connection server daemon (107.175.39.180:47568).
Sep 13 00:29:35.831318 sshd[5992]: Connection closed by authenticating user root 107.175.39.180 port 47568 [preauth]
Sep 13 00:29:35.836187 systemd[1]: sshd@51-195.201.238.219:22-107.175.39.180:47568.service: Deactivated successfully.
Sep 13 00:29:35.990062 systemd[1]: Started sshd@52-195.201.238.219:22-107.175.39.180:36326.service - OpenSSH per-connection server daemon (107.175.39.180:36326).
Sep 13 00:29:37.530703 sshd[5997]: Connection closed by authenticating user root 107.175.39.180 port 36326 [preauth]
Sep 13 00:29:37.531424 systemd[1]: sshd@52-195.201.238.219:22-107.175.39.180:36326.service: Deactivated successfully.
Sep 13 00:29:37.745858 systemd[1]: Started sshd@53-195.201.238.219:22-107.175.39.180:36328.service - OpenSSH per-connection server daemon (107.175.39.180:36328).
Sep 13 00:29:41.362317 sshd[6002]: Connection closed by authenticating user root 107.175.39.180 port 36328 [preauth]
Sep 13 00:29:41.367881 systemd[1]: sshd@53-195.201.238.219:22-107.175.39.180:36328.service: Deactivated successfully.
Sep 13 00:29:41.628879 systemd[1]: Started sshd@54-195.201.238.219:22-107.175.39.180:36330.service - OpenSSH per-connection server daemon (107.175.39.180:36330).
Sep 13 00:29:45.511907 sshd[6007]: Connection closed by authenticating user root 107.175.39.180 port 36330 [preauth]
Sep 13 00:29:45.515306 systemd[1]: sshd@54-195.201.238.219:22-107.175.39.180:36330.service: Deactivated successfully.
Sep 13 00:29:45.662277 systemd[1]: Started sshd@55-195.201.238.219:22-107.175.39.180:44450.service - OpenSSH per-connection server daemon (107.175.39.180:44450).
Sep 13 00:29:48.626645 sshd[6012]: Connection closed by authenticating user root 107.175.39.180 port 44450 [preauth]
Sep 13 00:29:48.632699 systemd[1]: sshd@55-195.201.238.219:22-107.175.39.180:44450.service: Deactivated successfully.
Sep 13 00:29:48.903664 systemd[1]: Started sshd@56-195.201.238.219:22-107.175.39.180:44452.service - OpenSSH per-connection server daemon (107.175.39.180:44452).
Sep 13 00:29:54.145147 sshd[6017]: Connection closed by authenticating user root 107.175.39.180 port 44452 [preauth]
Sep 13 00:29:54.148852 systemd[1]: sshd@56-195.201.238.219:22-107.175.39.180:44452.service: Deactivated successfully.
Sep 13 00:29:54.328528 systemd[1]: Started sshd@57-195.201.238.219:22-107.175.39.180:44462.service - OpenSSH per-connection server daemon (107.175.39.180:44462).
Sep 13 00:29:56.360902 systemd[1]: Started sshd@58-195.201.238.219:22-147.75.109.163:44688.service - OpenSSH per-connection server daemon (147.75.109.163:44688).
Sep 13 00:29:57.359445 sshd[6037]: Accepted publickey for core from 147.75.109.163 port 44688 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:29:57.361949 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:29:57.370199 systemd-logind[1449]: New session 8 of user core.
Sep 13 00:29:57.383195 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 13 00:29:57.659718 sshd[6023]: Connection closed by authenticating user root 107.175.39.180 port 44462 [preauth]
Sep 13 00:29:57.667271 systemd[1]: sshd@57-195.201.238.219:22-107.175.39.180:44462.service: Deactivated successfully.
Sep 13 00:29:57.877157 systemd[1]: Started sshd@59-195.201.238.219:22-107.175.39.180:57928.service - OpenSSH per-connection server daemon (107.175.39.180:57928).
Sep 13 00:29:58.168402 sshd[6037]: pam_unix(sshd:session): session closed for user core
Sep 13 00:29:58.184433 systemd[1]: sshd@58-195.201.238.219:22-147.75.109.163:44688.service: Deactivated successfully.
Sep 13 00:29:58.188439 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:29:58.191133 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:29:58.194458 systemd-logind[1449]: Removed session 8.
Sep 13 00:30:00.213292 sshd[6081]: Connection closed by authenticating user root 107.175.39.180 port 57928 [preauth]
Sep 13 00:30:00.216120 systemd[1]: sshd@59-195.201.238.219:22-107.175.39.180:57928.service: Deactivated successfully.
Sep 13 00:30:00.453006 systemd[1]: Started sshd@60-195.201.238.219:22-107.175.39.180:57944.service - OpenSSH per-connection server daemon (107.175.39.180:57944).
Sep 13 00:30:03.345785 systemd[1]: Started sshd@61-195.201.238.219:22-147.75.109.163:36878.service - OpenSSH per-connection server daemon (147.75.109.163:36878).
Sep 13 00:30:04.326903 sshd[6144]: Accepted publickey for core from 147.75.109.163 port 36878 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:04.330064 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:04.338503 systemd-logind[1449]: New session 9 of user core.
Sep 13 00:30:04.342256 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 00:30:05.093082 sshd[6144]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:05.101788 systemd[1]: sshd@61-195.201.238.219:22-147.75.109.163:36878.service: Deactivated successfully.
Sep 13 00:30:05.106572 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:30:05.109279 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:30:05.112218 systemd-logind[1449]: Removed session 9.
Sep 13 00:30:05.772493 systemd[1]: run-containerd-runc-k8s.io-5bbc5e1d0baa4661d670e785fffb989e9db363e341795847b44e8a1f8cd30723-runc.UCbWdJ.mount: Deactivated successfully.
Sep 13 00:30:05.868728 sshd[6118]: Connection closed by authenticating user root 107.175.39.180 port 57944 [preauth]
Sep 13 00:30:05.871912 systemd[1]: sshd@60-195.201.238.219:22-107.175.39.180:57944.service: Deactivated successfully.
Sep 13 00:30:06.037154 systemd[1]: Started sshd@62-195.201.238.219:22-107.175.39.180:33526.service - OpenSSH per-connection server daemon (107.175.39.180:33526).
Sep 13 00:30:10.187840 sshd[6183]: Connection closed by authenticating user root 107.175.39.180 port 33526 [preauth]
Sep 13 00:30:10.191441 systemd[1]: sshd@62-195.201.238.219:22-107.175.39.180:33526.service: Deactivated successfully.
Sep 13 00:30:10.273608 systemd[1]: Started sshd@63-195.201.238.219:22-147.75.109.163:40232.service - OpenSSH per-connection server daemon (147.75.109.163:40232).
Sep 13 00:30:10.418105 systemd[1]: Started sshd@64-195.201.238.219:22-107.175.39.180:33540.service - OpenSSH per-connection server daemon (107.175.39.180:33540).
Sep 13 00:30:11.257954 sshd[6202]: Accepted publickey for core from 147.75.109.163 port 40232 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:11.260818 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:11.268836 systemd-logind[1449]: New session 10 of user core.
Sep 13 00:30:11.274910 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 00:30:12.042911 sshd[6202]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:12.050198 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:30:12.051255 systemd[1]: sshd@63-195.201.238.219:22-147.75.109.163:40232.service: Deactivated successfully.
Sep 13 00:30:12.055774 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:30:12.058023 systemd-logind[1449]: Removed session 10.
Sep 13 00:30:12.222162 systemd[1]: Started sshd@65-195.201.238.219:22-147.75.109.163:40234.service - OpenSSH per-connection server daemon (147.75.109.163:40234).
Sep 13 00:30:13.218191 sshd[6227]: Accepted publickey for core from 147.75.109.163 port 40234 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:13.220351 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:13.227237 systemd-logind[1449]: New session 11 of user core.
Sep 13 00:30:13.235587 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:30:14.029223 sshd[6227]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:14.036029 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:30:14.036865 systemd[1]: sshd@65-195.201.238.219:22-147.75.109.163:40234.service: Deactivated successfully.
Sep 13 00:30:14.040939 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:30:14.042887 systemd-logind[1449]: Removed session 11.
Sep 13 00:30:14.211275 systemd[1]: Started sshd@66-195.201.238.219:22-147.75.109.163:40236.service - OpenSSH per-connection server daemon (147.75.109.163:40236).
Sep 13 00:30:14.471816 sshd[6205]: Connection closed by authenticating user root 107.175.39.180 port 33540 [preauth]
Sep 13 00:30:14.475150 systemd[1]: sshd@64-195.201.238.219:22-107.175.39.180:33540.service: Deactivated successfully.
Sep 13 00:30:14.624199 systemd[1]: Started sshd@67-195.201.238.219:22-107.175.39.180:33556.service - OpenSSH per-connection server daemon (107.175.39.180:33556).
Sep 13 00:30:15.196033 sshd[6238]: Accepted publickey for core from 147.75.109.163 port 40236 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:15.199977 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:15.206387 systemd-logind[1449]: New session 12 of user core.
Sep 13 00:30:15.213179 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:30:15.961457 sshd[6238]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:15.967579 systemd[1]: sshd@66-195.201.238.219:22-147.75.109.163:40236.service: Deactivated successfully.
Sep 13 00:30:15.970642 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:30:15.972987 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:30:15.974826 systemd-logind[1449]: Removed session 12.
Sep 13 00:30:18.239426 sshd[6243]: Connection closed by authenticating user root 107.175.39.180 port 33556 [preauth]
Sep 13 00:30:18.242363 systemd[1]: sshd@67-195.201.238.219:22-107.175.39.180:33556.service: Deactivated successfully.
Sep 13 00:30:18.436102 systemd[1]: Started sshd@68-195.201.238.219:22-107.175.39.180:59682.service - OpenSSH per-connection server daemon (107.175.39.180:59682).
Sep 13 00:30:20.870734 sshd[6262]: Connection closed by authenticating user root 107.175.39.180 port 59682 [preauth]
Sep 13 00:30:20.873472 systemd[1]: sshd@68-195.201.238.219:22-107.175.39.180:59682.service: Deactivated successfully.
Sep 13 00:30:21.131699 systemd[1]: Started sshd@69-195.201.238.219:22-107.175.39.180:59690.service - OpenSSH per-connection server daemon (107.175.39.180:59690).
Sep 13 00:30:21.150810 systemd[1]: Started sshd@70-195.201.238.219:22-147.75.109.163:47966.service - OpenSSH per-connection server daemon (147.75.109.163:47966).
Sep 13 00:30:22.202373 sshd[6269]: Accepted publickey for core from 147.75.109.163 port 47966 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:22.204320 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:22.210649 systemd-logind[1449]: New session 13 of user core.
Sep 13 00:30:22.216770 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:30:23.026946 sshd[6269]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:23.031789 systemd[1]: sshd@70-195.201.238.219:22-147.75.109.163:47966.service: Deactivated successfully.
Sep 13 00:30:23.034967 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:30:23.037481 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:30:23.039753 systemd-logind[1449]: Removed session 13.
Sep 13 00:30:26.058053 sshd[6267]: Connection closed by authenticating user root 107.175.39.180 port 59690 [preauth]
Sep 13 00:30:26.061526 systemd[1]: sshd@69-195.201.238.219:22-107.175.39.180:59690.service: Deactivated successfully.
Sep 13 00:30:26.284097 systemd[1]: Started sshd@71-195.201.238.219:22-107.175.39.180:40704.service - OpenSSH per-connection server daemon (107.175.39.180:40704).
Sep 13 00:30:28.203293 systemd[1]: Started sshd@72-195.201.238.219:22-147.75.109.163:47976.service - OpenSSH per-connection server daemon (147.75.109.163:47976).
Sep 13 00:30:29.181477 sshd[6326]: Accepted publickey for core from 147.75.109.163 port 47976 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:29.182505 sshd[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:29.191948 systemd-logind[1449]: New session 14 of user core.
Sep 13 00:30:29.200000 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:30:29.936278 sshd[6285]: Connection closed by authenticating user root 107.175.39.180 port 40704 [preauth]
Sep 13 00:30:29.941420 systemd[1]: sshd@71-195.201.238.219:22-107.175.39.180:40704.service: Deactivated successfully.
Sep 13 00:30:29.967589 sshd[6326]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:29.976504 systemd[1]: sshd@72-195.201.238.219:22-147.75.109.163:47976.service: Deactivated successfully.
Sep 13 00:30:29.983909 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:30:29.985755 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:30:29.987496 systemd-logind[1449]: Removed session 14.
Sep 13 00:30:30.129902 systemd[1]: Started sshd@73-195.201.238.219:22-107.175.39.180:40720.service - OpenSSH per-connection server daemon (107.175.39.180:40720).
Sep 13 00:30:34.049983 sshd[6341]: Connection closed by authenticating user root 107.175.39.180 port 40720 [preauth]
Sep 13 00:30:34.055344 systemd[1]: sshd@73-195.201.238.219:22-107.175.39.180:40720.service: Deactivated successfully.
Sep 13 00:30:34.278647 systemd[1]: Started sshd@74-195.201.238.219:22-107.175.39.180:40726.service - OpenSSH per-connection server daemon (107.175.39.180:40726).
Sep 13 00:30:35.151246 systemd[1]: Started sshd@75-195.201.238.219:22-147.75.109.163:59188.service - OpenSSH per-connection server daemon (147.75.109.163:59188).
Sep 13 00:30:36.142300 sshd[6372]: Accepted publickey for core from 147.75.109.163 port 59188 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:36.144925 sshd[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:36.151540 systemd-logind[1449]: New session 15 of user core.
Sep 13 00:30:36.153904 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:30:36.931975 sshd[6372]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:36.937566 systemd[1]: sshd@75-195.201.238.219:22-147.75.109.163:59188.service: Deactivated successfully.
Sep 13 00:30:36.941086 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:30:36.942510 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:30:36.943658 systemd-logind[1449]: Removed session 15.
Sep 13 00:30:37.116250 systemd[1]: Started sshd@76-195.201.238.219:22-147.75.109.163:59200.service - OpenSSH per-connection server daemon (147.75.109.163:59200).
Sep 13 00:30:37.853949 sshd[6370]: Connection closed by authenticating user root 107.175.39.180 port 40726 [preauth]
Sep 13 00:30:37.856464 systemd[1]: sshd@74-195.201.238.219:22-107.175.39.180:40726.service: Deactivated successfully.
Sep 13 00:30:38.104890 sshd[6386]: Accepted publickey for core from 147.75.109.163 port 59200 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:38.106212 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:38.113153 systemd-logind[1449]: New session 16 of user core.
Sep 13 00:30:38.118241 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:30:38.131243 systemd[1]: Started sshd@77-195.201.238.219:22-107.175.39.180:40286.service - OpenSSH per-connection server daemon (107.175.39.180:40286).
Sep 13 00:30:39.050014 sshd[6386]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:39.054610 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:30:39.055179 systemd[1]: sshd@76-195.201.238.219:22-147.75.109.163:59200.service: Deactivated successfully.
Sep 13 00:30:39.059122 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:30:39.061719 systemd-logind[1449]: Removed session 16.
Sep 13 00:30:39.231157 systemd[1]: Started sshd@78-195.201.238.219:22-147.75.109.163:59214.service - OpenSSH per-connection server daemon (147.75.109.163:59214).
Sep 13 00:30:40.242058 sshd[6402]: Accepted publickey for core from 147.75.109.163 port 59214 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:40.244920 sshd[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:40.253334 systemd-logind[1449]: New session 17 of user core.
Sep 13 00:30:40.257031 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:30:42.365778 sshd[6391]: Connection closed by authenticating user root 107.175.39.180 port 40286 [preauth]
Sep 13 00:30:42.384077 systemd[1]: sshd@77-195.201.238.219:22-107.175.39.180:40286.service: Deactivated successfully.
Sep 13 00:30:42.593255 systemd[1]: Started sshd@79-195.201.238.219:22-107.175.39.180:40300.service - OpenSSH per-connection server daemon (107.175.39.180:40300).
Sep 13 00:30:43.043231 sshd[6402]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:43.051943 systemd[1]: sshd@78-195.201.238.219:22-147.75.109.163:59214.service: Deactivated successfully.
Sep 13 00:30:43.056454 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:30:43.058648 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:30:43.060856 systemd-logind[1449]: Removed session 17.
Sep 13 00:30:43.215300 systemd[1]: Started sshd@80-195.201.238.219:22-147.75.109.163:33938.service - OpenSSH per-connection server daemon (147.75.109.163:33938).
Sep 13 00:30:44.209051 sshd[6427]: Accepted publickey for core from 147.75.109.163 port 33938 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:44.211581 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:44.220506 systemd-logind[1449]: New session 18 of user core.
Sep 13 00:30:44.228975 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:30:45.138746 sshd[6427]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:45.146786 systemd[1]: sshd@80-195.201.238.219:22-147.75.109.163:33938.service: Deactivated successfully.
Sep 13 00:30:45.151834 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:30:45.153604 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:30:45.154912 systemd-logind[1449]: Removed session 18.
Sep 13 00:30:45.316206 systemd[1]: Started sshd@81-195.201.238.219:22-147.75.109.163:33940.service - OpenSSH per-connection server daemon (147.75.109.163:33940).
Sep 13 00:30:46.316155 sshd[6439]: Accepted publickey for core from 147.75.109.163 port 33940 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:46.319166 sshd[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:46.328001 systemd-logind[1449]: New session 19 of user core.
Sep 13 00:30:46.333103 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:30:46.537625 sshd[6418]: Connection closed by authenticating user root 107.175.39.180 port 40300 [preauth]
Sep 13 00:30:46.541546 systemd[1]: sshd@79-195.201.238.219:22-107.175.39.180:40300.service: Deactivated successfully.
Sep 13 00:30:46.768040 systemd[1]: Started sshd@82-195.201.238.219:22-107.175.39.180:42128.service - OpenSSH per-connection server daemon (107.175.39.180:42128).
Sep 13 00:30:47.080990 sshd[6439]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:47.084890 systemd[1]: sshd@81-195.201.238.219:22-147.75.109.163:33940.service: Deactivated successfully.
Sep 13 00:30:47.088230 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:30:47.090450 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:30:47.092583 systemd-logind[1449]: Removed session 19.
Sep 13 00:30:50.562609 sshd[6445]: Connection closed by authenticating user root 107.175.39.180 port 42128 [preauth]
Sep 13 00:30:50.566825 systemd[1]: sshd@82-195.201.238.219:22-107.175.39.180:42128.service: Deactivated successfully.
Sep 13 00:30:50.768097 systemd[1]: Started sshd@83-195.201.238.219:22-107.175.39.180:42132.service - OpenSSH per-connection server daemon (107.175.39.180:42132).
Sep 13 00:30:52.259080 systemd[1]: Started sshd@84-195.201.238.219:22-147.75.109.163:59568.service - OpenSSH per-connection server daemon (147.75.109.163:59568).
Sep 13 00:30:53.185316 kubelet[2655]: I0913 00:30:53.185258 2655 ???:1] "http: TLS handshake error from 167.94.145.97:50232: read tcp 195.201.238.219:10250->167.94.145.97:50232: read: connection reset by peer" Sep 13 00:30:53.238301 sshd[6465]: Accepted publickey for core from 147.75.109.163 port 59568 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:30:53.241276 sshd[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:30:53.253161 systemd-logind[1449]: New session 20 of user core. Sep 13 00:30:53.259975 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:30:53.262668 sshd[6462]: Connection closed by authenticating user root 107.175.39.180 port 42132 [preauth] Sep 13 00:30:53.260412 systemd[1]: sshd@83-195.201.238.219:22-107.175.39.180:42132.service: Deactivated successfully. Sep 13 00:30:54.002569 sshd[6465]: pam_unix(sshd:session): session closed for user core Sep 13 00:30:54.008140 systemd[1]: sshd@84-195.201.238.219:22-147.75.109.163:59568.service: Deactivated successfully. Sep 13 00:30:54.011198 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:30:54.014596 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:30:54.015910 systemd-logind[1449]: Removed session 20. Sep 13 00:30:54.104897 systemd[1]: Started sshd@85-195.201.238.219:22-107.175.39.180:42140.service - OpenSSH per-connection server daemon (107.175.39.180:42140). Sep 13 00:30:57.431944 sshd[6479]: Connection closed by authenticating user root 107.175.39.180 port 42140 [preauth] Sep 13 00:30:57.436311 systemd[1]: sshd@85-195.201.238.219:22-107.175.39.180:42140.service: Deactivated successfully. Sep 13 00:30:57.564927 systemd[1]: run-containerd-runc-k8s.io-5bbc5e1d0baa4661d670e785fffb989e9db363e341795847b44e8a1f8cd30723-runc.vjYEWK.mount: Deactivated successfully. Sep 13 00:30:57.590447 systemd[1]: Started sshd@86-195.201.238.219:22-107.175.39.180:50768.service - OpenSSH per-connection server daemon (107.175.39.180:50768). Sep 13 00:30:57.815528 update_engine[1451]: I20250913 00:30:57.812987 1451 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 00:30:57.815528 update_engine[1451]: I20250913 00:30:57.813104 1451 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 00:30:57.815528 update_engine[1451]: I20250913 00:30:57.813485 1451 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 00:30:57.815528 update_engine[1451]: I20250913 00:30:57.814200 1451 omaha_request_params.cc:62] Current group set to lts Sep 13 00:30:57.815528 update_engine[1451]: I20250913 00:30:57.814349 1451 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 00:30:57.815528 update_engine[1451]: I20250913 00:30:57.814366 1451 update_attempter.cc:643] Scheduling an action processor start. 
Sep 13 00:30:57.815528 update_engine[1451]: I20250913 00:30:57.814394 1451 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:30:57.818177 update_engine[1451]: I20250913 00:30:57.818127 1451 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 13 00:30:57.818736 update_engine[1451]: I20250913 00:30:57.818418 1451 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 13 00:30:57.818736 update_engine[1451]: I20250913 00:30:57.818437 1451 omaha_request_action.cc:272] Request:
Sep 13 00:30:57.818736 update_engine[1451]:
Sep 13 00:30:57.818736 update_engine[1451]:
Sep 13 00:30:57.818736 update_engine[1451]:
Sep 13 00:30:57.818736 update_engine[1451]:
Sep 13 00:30:57.818736 update_engine[1451]:
Sep 13 00:30:57.818736 update_engine[1451]:
Sep 13 00:30:57.818736 update_engine[1451]:
Sep 13 00:30:57.818736 update_engine[1451]:
Sep 13 00:30:57.818736 update_engine[1451]: I20250913 00:30:57.818444 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:30:57.823932 update_engine[1451]: I20250913 00:30:57.823897 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:30:57.824699 update_engine[1451]: I20250913 00:30:57.824609 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:30:57.826940 update_engine[1451]: E20250913 00:30:57.825822 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:30:57.826940 update_engine[1451]: I20250913 00:30:57.825935 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 13 00:30:57.833970 locksmithd[1485]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 13 00:30:59.174233 systemd[1]: Started sshd@87-195.201.238.219:22-147.75.109.163:59582.service - OpenSSH per-connection server daemon (147.75.109.163:59582).
Sep 13 00:31:00.146761 sshd[6548]: Accepted publickey for core from 147.75.109.163 port 59582 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:31:00.149544 sshd[6548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:31:00.156346 systemd-logind[1449]: New session 21 of user core.
Sep 13 00:31:00.160179 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:31:00.904624 sshd[6548]: pam_unix(sshd:session): session closed for user core
Sep 13 00:31:00.910669 systemd[1]: sshd@87-195.201.238.219:22-147.75.109.163:59582.service: Deactivated successfully.
Sep 13 00:31:00.913666 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:31:00.917560 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:31:00.919126 systemd-logind[1449]: Removed session 21.
Sep 13 00:31:01.894176 sshd[6522]: Connection closed by authenticating user root 107.175.39.180 port 50768 [preauth]
Sep 13 00:31:01.899403 systemd[1]: sshd@86-195.201.238.219:22-107.175.39.180:50768.service: Deactivated successfully.
Sep 13 00:31:02.064116 systemd[1]: Started sshd@88-195.201.238.219:22-107.175.39.180:50784.service - OpenSSH per-connection server daemon (107.175.39.180:50784).
Sep 13 00:31:04.225566 sshd[6563]: Connection closed by authenticating user root 107.175.39.180 port 50784 [preauth]
Sep 13 00:31:04.229024 systemd[1]: sshd@88-195.201.238.219:22-107.175.39.180:50784.service: Deactivated successfully.
Sep 13 00:31:04.433663 systemd[1]: Started sshd@89-195.201.238.219:22-107.175.39.180:50790.service - OpenSSH per-connection server daemon (107.175.39.180:50790).
Sep 13 00:31:06.201746 sshd[6590]: Connection closed by authenticating user root 107.175.39.180 port 50790 [preauth]
Sep 13 00:31:06.204564 systemd[1]: sshd@89-195.201.238.219:22-107.175.39.180:50790.service: Deactivated successfully.
Sep 13 00:31:06.345076 systemd[1]: Started sshd@90-195.201.238.219:22-107.175.39.180:38826.service - OpenSSH per-connection server daemon (107.175.39.180:38826).
Sep 13 00:31:07.813945 update_engine[1451]: I20250913 00:31:07.813741 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:31:07.814360 update_engine[1451]: I20250913 00:31:07.814033 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:31:07.814360 update_engine[1451]: I20250913 00:31:07.814282 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:31:07.815580 update_engine[1451]: E20250913 00:31:07.815491 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:31:07.815580 update_engine[1451]: I20250913 00:31:07.815589 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 13 00:31:07.886010 sshd[6615]: Connection closed by authenticating user root 107.175.39.180 port 38826 [preauth]
Sep 13 00:31:07.889203 systemd[1]: sshd@90-195.201.238.219:22-107.175.39.180:38826.service: Deactivated successfully.
Sep 13 00:31:08.053179 systemd[1]: Started sshd@91-195.201.238.219:22-107.175.39.180:38836.service - OpenSSH per-connection server daemon (107.175.39.180:38836).
Sep 13 00:31:09.333645 kubelet[2655]: I0913 00:31:09.333570 2655 ???:1] "http: TLS handshake error from 167.94.145.97:57694: tls: client offered only unsupported versions: [302 301]"
Sep 13 00:31:09.467721 sshd[6620]: Connection closed by authenticating user root 107.175.39.180 port 38836 [preauth]
Sep 13 00:31:09.471975 systemd[1]: sshd@91-195.201.238.219:22-107.175.39.180:38836.service: Deactivated successfully.
Sep 13 00:31:09.606074 systemd[1]: Started sshd@92-195.201.238.219:22-107.175.39.180:38850.service - OpenSSH per-connection server daemon (107.175.39.180:38850).
Sep 13 00:31:11.176874 sshd[6625]: Connection closed by authenticating user root 107.175.39.180 port 38850 [preauth]
Sep 13 00:31:11.175443 systemd[1]: sshd@92-195.201.238.219:22-107.175.39.180:38850.service: Deactivated successfully.
Sep 13 00:31:11.331916 systemd[1]: Started sshd@93-195.201.238.219:22-107.175.39.180:38858.service - OpenSSH per-connection server daemon (107.175.39.180:38858).
Sep 13 00:31:13.964668 sshd[6630]: Connection closed by authenticating user root 107.175.39.180 port 38858 [preauth]
Sep 13 00:31:13.966757 systemd[1]: sshd@93-195.201.238.219:22-107.175.39.180:38858.service: Deactivated successfully.
Sep 13 00:31:14.167266 systemd[1]: Started sshd@94-195.201.238.219:22-107.175.39.180:38866.service - OpenSSH per-connection server daemon (107.175.39.180:38866).
Sep 13 00:31:14.523367 kubelet[2655]: I0913 00:31:14.523188 2655 ???:1] "http: TLS handshake error from 167.94.145.97:57732: tls: client offered only unsupported versions: [301]"
Sep 13 00:31:15.949139 kubelet[2655]: E0913 00:31:15.948962 2655 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53916->10.0.0.2:2379: read: connection timed out"
Sep 13 00:31:16.081786 kubelet[2655]: I0913 00:31:16.081740 2655 ???:1] "http: TLS handshake error from 167.94.145.97:48486: tls: client offered only unsupported versions: []"
Sep 13 00:31:16.640256 sshd[6635]: Connection closed by authenticating user root 107.175.39.180 port 38866 [preauth]
Sep 13 00:31:16.643808 systemd[1]: sshd@94-195.201.238.219:22-107.175.39.180:38866.service: Deactivated successfully.
Sep 13 00:31:16.686977 systemd[1]: cri-containerd-d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c.scope: Deactivated successfully.
Sep 13 00:31:16.688799 systemd[1]: cri-containerd-d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c.scope: Consumed 22.851s CPU time.
Sep 13 00:31:16.717447 containerd[1470]: time="2025-09-13T00:31:16.717345486Z" level=info msg="shim disconnected" id=d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c namespace=k8s.io
Sep 13 00:31:16.717447 containerd[1470]: time="2025-09-13T00:31:16.717442799Z" level=warning msg="cleaning up after shim disconnected" id=d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c namespace=k8s.io
Sep 13 00:31:16.717978 containerd[1470]: time="2025-09-13T00:31:16.717452558Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:31:16.719976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c-rootfs.mount: Deactivated successfully.
Sep 13 00:31:16.781327 systemd[1]: Started sshd@95-195.201.238.219:22-107.175.39.180:41572.service - OpenSSH per-connection server daemon (107.175.39.180:41572).
Sep 13 00:31:17.495964 kubelet[2655]: I0913 00:31:17.495931 2655 scope.go:117] "RemoveContainer" containerID="d9f7a9628ea373aea5abc09ccd03243d4304a8c013187eb9a1073bfd5ae9372c"
Sep 13 00:31:17.499222 containerd[1470]: time="2025-09-13T00:31:17.498600118Z" level=info msg="CreateContainer within sandbox \"85a1e3c779c20809cb306b9af9b26c49bde9119defbcd08ab3d34ff9ea1a2647\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 13 00:31:17.535837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771723196.mount: Deactivated successfully.
Sep 13 00:31:17.538057 systemd[1]: cri-containerd-1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6.scope: Deactivated successfully.
Sep 13 00:31:17.538392 systemd[1]: cri-containerd-1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6.scope: Consumed 5.562s CPU time, 20.0M memory peak, 0B memory swap peak.
Sep 13 00:31:17.546057 containerd[1470]: time="2025-09-13T00:31:17.544499670Z" level=info msg="CreateContainer within sandbox \"85a1e3c779c20809cb306b9af9b26c49bde9119defbcd08ab3d34ff9ea1a2647\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0bbf91541528552932ccd571fe17caf86bcaca6ebcf23e85af5e2f68dd54cd11\""
Sep 13 00:31:17.548008 containerd[1470]: time="2025-09-13T00:31:17.547792900Z" level=info msg="StartContainer for \"0bbf91541528552932ccd571fe17caf86bcaca6ebcf23e85af5e2f68dd54cd11\""
Sep 13 00:31:17.579880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6-rootfs.mount: Deactivated successfully.
Sep 13 00:31:17.582341 containerd[1470]: time="2025-09-13T00:31:17.582274420Z" level=info msg="shim disconnected" id=1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6 namespace=k8s.io
Sep 13 00:31:17.582524 containerd[1470]: time="2025-09-13T00:31:17.582506725Z" level=warning msg="cleaning up after shim disconnected" id=1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6 namespace=k8s.io
Sep 13 00:31:17.582603 containerd[1470]: time="2025-09-13T00:31:17.582589320Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:31:17.597155 systemd[1]: Started cri-containerd-0bbf91541528552932ccd571fe17caf86bcaca6ebcf23e85af5e2f68dd54cd11.scope - libcontainer container 0bbf91541528552932ccd571fe17caf86bcaca6ebcf23e85af5e2f68dd54cd11.
Sep 13 00:31:17.638564 containerd[1470]: time="2025-09-13T00:31:17.637970426Z" level=info msg="StartContainer for \"0bbf91541528552932ccd571fe17caf86bcaca6ebcf23e85af5e2f68dd54cd11\" returns successfully"
Sep 13 00:31:17.678198 sshd[6671]: Connection closed by authenticating user root 107.175.39.180 port 41572 [preauth]
Sep 13 00:31:17.684092 systemd[1]: sshd@95-195.201.238.219:22-107.175.39.180:41572.service: Deactivated successfully.
Sep 13 00:31:17.810166 systemd[1]: Started sshd@96-195.201.238.219:22-107.175.39.180:41576.service - OpenSSH per-connection server daemon (107.175.39.180:41576).
Sep 13 00:31:17.812369 update_engine[1451]: I20250913 00:31:17.810716 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:31:17.812369 update_engine[1451]: I20250913 00:31:17.810953 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:31:17.812369 update_engine[1451]: I20250913 00:31:17.811259 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:31:17.813536 update_engine[1451]: E20250913 00:31:17.813408 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:31:17.813536 update_engine[1451]: I20250913 00:31:17.813501 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 13 00:31:18.504261 kubelet[2655]: I0913 00:31:18.504176 2655 scope.go:117] "RemoveContainer" containerID="1c84612a32ae63ebb32be9e32cc70650255a1b3ab2e65f1cab4566dde898b4e6"
Sep 13 00:31:18.507202 containerd[1470]: time="2025-09-13T00:31:18.507136936Z" level=info msg="CreateContainer within sandbox \"f2bff3ecd07e18ee29f253ef2f91aeed684f26b6fb9cb2c916027e495068f84e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 13 00:31:18.543727 containerd[1470]: time="2025-09-13T00:31:18.543624569Z" level=info msg="CreateContainer within sandbox \"f2bff3ecd07e18ee29f253ef2f91aeed684f26b6fb9cb2c916027e495068f84e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"572c9893e4164bf9769f2a2f2336fed53e608843991749b4d1da77161989f87c\""
Sep 13 00:31:18.544397 containerd[1470]: time="2025-09-13T00:31:18.544345045Z" level=info msg="StartContainer for \"572c9893e4164bf9769f2a2f2336fed53e608843991749b4d1da77161989f87c\""
Sep 13 00:31:18.583001 systemd[1]: Started cri-containerd-572c9893e4164bf9769f2a2f2336fed53e608843991749b4d1da77161989f87c.scope - libcontainer container 572c9893e4164bf9769f2a2f2336fed53e608843991749b4d1da77161989f87c.
Sep 13 00:31:18.645051 containerd[1470]: time="2025-09-13T00:31:18.644981367Z" level=info msg="StartContainer for \"572c9893e4164bf9769f2a2f2336fed53e608843991749b4d1da77161989f87c\" returns successfully"
Sep 13 00:31:18.688492 sshd[6736]: Connection closed by authenticating user root 107.175.39.180 port 41576 [preauth]
Sep 13 00:31:18.690773 systemd[1]: sshd@96-195.201.238.219:22-107.175.39.180:41576.service: Deactivated successfully.
Sep 13 00:31:18.902070 systemd[1]: Started sshd@97-195.201.238.219:22-107.175.39.180:41578.service - OpenSSH per-connection server daemon (107.175.39.180:41578).
Sep 13 00:31:20.944975 kubelet[2655]: E0913 00:31:20.944590 2655 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53742->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-9bb66b8eb5.1864b02f01512c40 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-9bb66b8eb5,UID:c5a75d69944e6a4cf13470709ffc2a68,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-9bb66b8eb5,},FirstTimestamp:2025-09-13 00:31:10.513384512 +0000 UTC m=+195.296415740,LastTimestamp:2025-09-13 00:31:10.513384512 +0000 UTC m=+195.296415740,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-9bb66b8eb5,}"
Sep 13 00:31:21.938631 systemd[1]: cri-containerd-47d0332cbced54df5f9838908391c2ce05c5020f4e041465361028234f5679ba.scope: Deactivated successfully.
Sep 13 00:31:21.939636 systemd[1]: cri-containerd-47d0332cbced54df5f9838908391c2ce05c5020f4e041465361028234f5679ba.scope: Consumed 2.911s CPU time, 16.3M memory peak, 0B memory swap peak.
Sep 13 00:31:21.966385 sshd[6775]: Connection closed by authenticating user root 107.175.39.180 port 41578 [preauth]
Sep 13 00:31:21.970011 systemd[1]: sshd@97-195.201.238.219:22-107.175.39.180:41578.service: Deactivated successfully.
Sep 13 00:31:21.983070 containerd[1470]: time="2025-09-13T00:31:21.983000404Z" level=info msg="shim disconnected" id=47d0332cbced54df5f9838908391c2ce05c5020f4e041465361028234f5679ba namespace=k8s.io
Sep 13 00:31:21.983070 containerd[1470]: time="2025-09-13T00:31:21.983063680Z" level=warning msg="cleaning up after shim disconnected" id=47d0332cbced54df5f9838908391c2ce05c5020f4e041465361028234f5679ba namespace=k8s.io
Sep 13 00:31:21.984083 containerd[1470]: time="2025-09-13T00:31:21.983085279Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:31:21.984129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47d0332cbced54df5f9838908391c2ce05c5020f4e041465361028234f5679ba-rootfs.mount: Deactivated successfully.
Sep 13 00:31:22.110223 systemd[1]: Started sshd@98-195.201.238.219:22-107.175.39.180:41580.service - OpenSSH per-connection server daemon (107.175.39.180:41580).