Sep 12 23:56:46.888721 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 23:56:46.888755 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 12 23:56:46.888768 kernel: KASLR enabled
Sep 12 23:56:46.888774 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 12 23:56:46.888780 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Sep 12 23:56:46.888785 kernel: random: crng init done
Sep 12 23:56:46.888793 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:56:46.888798 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 12 23:56:46.888805 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 12 23:56:46.888812 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:56:46.888818 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:56:46.888824 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:56:46.888830 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:56:46.888836 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:56:46.888844 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:56:46.888852 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:56:46.888859 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:56:46.888865 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:56:46.888871 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 23:56:46.888878 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 12 23:56:46.888884 kernel: NUMA: Failed to initialise from firmware
Sep 12 23:56:46.888891 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 12 23:56:46.888897 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Sep 12 23:56:46.888903 kernel: Zone ranges:
Sep 12 23:56:46.888910 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 12 23:56:46.888918 kernel: DMA32 empty
Sep 12 23:56:46.888924 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 12 23:56:46.888931 kernel: Movable zone start for each node
Sep 12 23:56:46.888937 kernel: Early memory node ranges
Sep 12 23:56:46.888943 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Sep 12 23:56:46.888950 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 12 23:56:46.888956 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 12 23:56:46.888963 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 12 23:56:46.888969 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 12 23:56:46.888975 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 12 23:56:46.888982 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 12 23:56:46.888988 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 12 23:56:46.888996 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 12 23:56:46.889002 kernel: psci: probing for conduit method from ACPI.
Sep 12 23:56:46.889009 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 23:56:46.889019 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 23:56:46.889026 kernel: psci: Trusted OS migration not required
Sep 12 23:56:46.889076 kernel: psci: SMC Calling Convention v1.1
Sep 12 23:56:46.889088 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 23:56:46.889095 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 23:56:46.889102 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 23:56:46.889109 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 23:56:46.889116 kernel: Detected PIPT I-cache on CPU0
Sep 12 23:56:46.889122 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 23:56:46.889129 kernel: CPU features: detected: Hardware dirty bit management
Sep 12 23:56:46.889136 kernel: CPU features: detected: Spectre-v4
Sep 12 23:56:46.889143 kernel: CPU features: detected: Spectre-BHB
Sep 12 23:56:46.889150 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 23:56:46.889158 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 23:56:46.889165 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 23:56:46.889172 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 23:56:46.889179 kernel: alternatives: applying boot alternatives
Sep 12 23:56:46.889187 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:56:46.889194 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:56:46.889201 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:56:46.889208 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:56:46.889215 kernel: Fallback order for Node 0: 0
Sep 12 23:56:46.889222 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Sep 12 23:56:46.889229 kernel: Policy zone: Normal
Sep 12 23:56:46.889237 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:56:46.889244 kernel: software IO TLB: area num 2.
Sep 12 23:56:46.889251 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Sep 12 23:56:46.889259 kernel: Memory: 3882744K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 213256K reserved, 0K cma-reserved)
Sep 12 23:56:46.889266 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 23:56:46.889273 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:56:46.889281 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:56:46.889288 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 23:56:46.889295 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:56:46.889302 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:56:46.889308 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:56:46.889317 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 23:56:46.889324 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 23:56:46.889331 kernel: GICv3: 256 SPIs implemented
Sep 12 23:56:46.889350 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 23:56:46.889358 kernel: Root IRQ handler: gic_handle_irq
Sep 12 23:56:46.889366 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 23:56:46.889373 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 23:56:46.889380 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 23:56:46.889387 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 23:56:46.889394 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 23:56:46.889401 kernel: GICv3: using LPI property table @0x00000001000e0000
Sep 12 23:56:46.889408 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Sep 12 23:56:46.889417 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:56:46.889424 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:56:46.889431 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 23:56:46.889438 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 23:56:46.889445 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 23:56:46.889452 kernel: Console: colour dummy device 80x25
Sep 12 23:56:46.889459 kernel: ACPI: Core revision 20230628
Sep 12 23:56:46.889467 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 23:56:46.889474 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:56:46.889481 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 23:56:46.889490 kernel: landlock: Up and running.
Sep 12 23:56:46.889497 kernel: SELinux: Initializing.
Sep 12 23:56:46.889504 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:56:46.889511 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:56:46.889518 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:56:46.889525 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:56:46.889532 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:56:46.889540 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:56:46.889547 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 12 23:56:46.889556 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 12 23:56:46.889563 kernel: Remapping and enabling EFI services.
Sep 12 23:56:46.889570 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:56:46.889577 kernel: Detected PIPT I-cache on CPU1
Sep 12 23:56:46.889584 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 23:56:46.889592 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Sep 12 23:56:46.890634 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:56:46.890649 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 23:56:46.890656 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 23:56:46.890664 kernel: SMP: Total of 2 processors activated.
Sep 12 23:56:46.890676 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 23:56:46.890684 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 23:56:46.890697 kernel: CPU features: detected: Common not Private translations
Sep 12 23:56:46.890706 kernel: CPU features: detected: CRC32 instructions
Sep 12 23:56:46.890713 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 23:56:46.890721 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 23:56:46.890728 kernel: CPU features: detected: LSE atomic instructions
Sep 12 23:56:46.890736 kernel: CPU features: detected: Privileged Access Never
Sep 12 23:56:46.890743 kernel: CPU features: detected: RAS Extension Support
Sep 12 23:56:46.890752 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 23:56:46.890760 kernel: CPU: All CPU(s) started at EL1
Sep 12 23:56:46.890767 kernel: alternatives: applying system-wide alternatives
Sep 12 23:56:46.890775 kernel: devtmpfs: initialized
Sep 12 23:56:46.890782 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:56:46.890790 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 23:56:46.890797 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:56:46.890807 kernel: SMBIOS 3.0.0 present.
Sep 12 23:56:46.890814 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Sep 12 23:56:46.890822 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:56:46.890829 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 23:56:46.890837 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 23:56:46.890845 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 23:56:46.890852 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:56:46.890860 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Sep 12 23:56:46.890867 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:56:46.890876 kernel: cpuidle: using governor menu
Sep 12 23:56:46.890885 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 23:56:46.890892 kernel: ASID allocator initialised with 32768 entries
Sep 12 23:56:46.890900 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:56:46.890907 kernel: Serial: AMBA PL011 UART driver
Sep 12 23:56:46.890915 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 23:56:46.890922 kernel: Modules: 0 pages in range for non-PLT usage
Sep 12 23:56:46.890930 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 23:56:46.890938 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:56:46.890948 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:56:46.890955 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 23:56:46.890962 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 23:56:46.890970 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:56:46.890977 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:56:46.890984 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 23:56:46.890991 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 23:56:46.890999 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:56:46.891006 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:56:46.891015 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:56:46.891023 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:56:46.891031 kernel: ACPI: Interpreter enabled
Sep 12 23:56:46.891038 kernel: ACPI: Using GIC for interrupt routing
Sep 12 23:56:46.891045 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 23:56:46.891053 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 23:56:46.891061 kernel: printk: console [ttyAMA0] enabled
Sep 12 23:56:46.891068 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 23:56:46.891258 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:56:46.891382 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 23:56:46.891466 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 23:56:46.891532 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 23:56:46.891656 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 23:56:46.891670 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 23:56:46.891678 kernel: PCI host bridge to bus 0000:00
Sep 12 23:56:46.891773 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 23:56:46.891843 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 23:56:46.891905 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 23:56:46.891963 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 23:56:46.892047 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 12 23:56:46.892125 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Sep 12 23:56:46.892223 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Sep 12 23:56:46.892301 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 12 23:56:46.892397 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 12 23:56:46.892470 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Sep 12 23:56:46.892545 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 12 23:56:46.892629 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Sep 12 23:56:46.892708 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 12 23:56:46.892776 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Sep 12 23:56:46.892856 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 12 23:56:46.892933 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Sep 12 23:56:46.893018 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 12 23:56:46.893102 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Sep 12 23:56:46.893197 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 12 23:56:46.893268 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Sep 12 23:56:46.893388 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 12 23:56:46.893470 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Sep 12 23:56:46.893546 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 12 23:56:46.898783 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Sep 12 23:56:46.898934 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 12 23:56:46.899006 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Sep 12 23:56:46.899092 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Sep 12 23:56:46.899159 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Sep 12 23:56:46.899259 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 12 23:56:46.899335 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Sep 12 23:56:46.899433 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 23:56:46.899523 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 12 23:56:46.899689 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 12 23:56:46.899768 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Sep 12 23:56:46.899849 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 12 23:56:46.899921 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Sep 12 23:56:46.899997 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Sep 12 23:56:46.900089 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 12 23:56:46.900166 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Sep 12 23:56:46.900258 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 12 23:56:46.900361 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Sep 12 23:56:46.900450 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 12 23:56:46.900526 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Sep 12 23:56:46.901690 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 12 23:56:46.901867 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 12 23:56:46.901955 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Sep 12 23:56:46.902028 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Sep 12 23:56:46.902106 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 12 23:56:46.902187 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Sep 12 23:56:46.902258 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Sep 12 23:56:46.902330 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Sep 12 23:56:46.902436 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Sep 12 23:56:46.902511 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Sep 12 23:56:46.902582 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Sep 12 23:56:46.902674 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 12 23:56:46.902749 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Sep 12 23:56:46.902842 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Sep 12 23:56:46.902921 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 12 23:56:46.902997 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Sep 12 23:56:46.903075 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Sep 12 23:56:46.903153 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 12 23:56:46.903220 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Sep 12 23:56:46.903286 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Sep 12 23:56:46.903372 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 12 23:56:46.903444 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Sep 12 23:56:46.903510 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Sep 12 23:56:46.903587 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 12 23:56:46.904924 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Sep 12 23:56:46.905012 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Sep 12 23:56:46.905088 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 12 23:56:46.905159 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Sep 12 23:56:46.905225 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Sep 12 23:56:46.905298 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 12 23:56:46.905425 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Sep 12 23:56:46.905533 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Sep 12 23:56:46.905634 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Sep 12 23:56:46.905705 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 12 23:56:46.905777 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Sep 12 23:56:46.905844 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 12 23:56:46.905914 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Sep 12 23:56:46.905983 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 12 23:56:46.906059 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Sep 12 23:56:46.906130 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 12 23:56:46.906201 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Sep 12 23:56:46.906269 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 12 23:56:46.906352 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Sep 12 23:56:46.906432 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 12 23:56:46.906512 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Sep 12 23:56:46.906582 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 12 23:56:46.907839 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Sep 12 23:56:46.907939 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 12 23:56:46.908012 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Sep 12 23:56:46.908081 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 12 23:56:46.908156 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Sep 12 23:56:46.908231 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Sep 12 23:56:46.908301 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Sep 12 23:56:46.908436 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 12 23:56:46.908515 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Sep 12 23:56:46.908582 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 12 23:56:46.909028 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Sep 12 23:56:46.909107 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 12 23:56:46.909178 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Sep 12 23:56:46.909251 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 12 23:56:46.909320 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Sep 12 23:56:46.909411 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 12 23:56:46.909485 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Sep 12 23:56:46.909551 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 12 23:56:46.910692 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Sep 12 23:56:46.910787 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 12 23:56:46.910859 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Sep 12 23:56:46.910934 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 12 23:56:46.911005 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Sep 12 23:56:46.911071 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Sep 12 23:56:46.911145 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Sep 12 23:56:46.911224 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Sep 12 23:56:46.911294 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 23:56:46.911387 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Sep 12 23:56:46.911462 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 12 23:56:46.911535 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 12 23:56:46.911623 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Sep 12 23:56:46.911700 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 12 23:56:46.911776 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Sep 12 23:56:46.911848 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 12 23:56:46.911921 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 12 23:56:46.911987 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Sep 12 23:56:46.912054 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 12 23:56:46.912128 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 12 23:56:46.912197 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Sep 12 23:56:46.912266 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 12 23:56:46.912333 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 12 23:56:46.912421 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Sep 12 23:56:46.912490 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 12 23:56:46.912566 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 12 23:56:46.912866 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 12 23:56:46.912946 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 12 23:56:46.913011 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Sep 12 23:56:46.913075 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 12 23:56:46.913474 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Sep 12 23:56:46.913692 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 12 23:56:46.913780 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 12 23:56:46.913847 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 12 23:56:46.913911 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 12 23:56:46.913987 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Sep 12 23:56:46.914056 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Sep 12 23:56:46.914129 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 12 23:56:46.914196 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 12 23:56:46.914266 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Sep 12 23:56:46.914333 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 12 23:56:46.914428 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Sep 12 23:56:46.914499 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Sep 12 23:56:46.914568 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Sep 12 23:56:46.914652 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 12 23:56:46.914720 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 12 23:56:46.914786 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Sep 12 23:56:46.914857 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 12 23:56:46.914927 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 12 23:56:46.915004 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 12 23:56:46.915071 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Sep 12 23:56:46.915148 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 12 23:56:46.915222 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 12 23:56:46.915290 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Sep 12 23:56:46.915369 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Sep 12 23:56:46.915443 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 12 23:56:46.915523 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 23:56:46.915591 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 23:56:46.915889 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 23:56:46.915968 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 12 23:56:46.916031 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Sep 12 23:56:46.916096 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 12 23:56:46.916174 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Sep 12 23:56:46.916235 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Sep 12 23:56:46.916297 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 12 23:56:46.916441 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Sep 12 23:56:46.916525 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Sep 12 23:56:46.916587 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 12 23:56:46.916687 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 12 23:56:46.916751 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Sep 12 23:56:46.916815 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 12 23:56:46.916902 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Sep 12 23:56:46.916971 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Sep 12 23:56:46.917034 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 12 23:56:46.917106 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Sep 12 23:56:46.917171 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Sep 12 23:56:46.917232 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 12 23:56:46.917308 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Sep 12 23:56:46.917391 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Sep 12 23:56:46.917466 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 12 23:56:46.917538 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Sep 12 23:56:46.917649 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Sep 12 23:56:46.917723 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 12 23:56:46.917798 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Sep 12 23:56:46.917861 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Sep 12 23:56:46.917924 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 12 23:56:46.917937 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 23:56:46.917946 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 23:56:46.917956 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 23:56:46.917964 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 23:56:46.917972 kernel: iommu: Default domain type: Translated
Sep 12 23:56:46.917980 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 23:56:46.917988 kernel: efivars: Registered efivars operations
Sep 12 23:56:46.917995 kernel: vgaarb: loaded
Sep 12 23:56:46.918003 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 23:56:46.918013 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:56:46.918021 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:56:46.918029 kernel: pnp: PnP ACPI init
Sep 12 23:56:46.918108 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 23:56:46.918120 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 23:56:46.918128 kernel: NET: Registered PF_INET protocol family
Sep 12 23:56:46.918136 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:56:46.918144 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:56:46.918155 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:56:46.918163 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:56:46.918171 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:56:46.918178 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:56:46.918186 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:56:46.918194 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:56:46.918203 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:56:46.918285 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Sep 12 23:56:46.918297 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:56:46.918308 kernel: kvm [1]: HYP mode not available
Sep 12 23:56:46.918316 kernel: Initialise system trusted keyrings
Sep 12 23:56:46.918324 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:56:46.918332 kernel: Key type asymmetric registered
Sep 12 23:56:46.918351 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:56:46.918360 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 23:56:46.918368 kernel: io scheduler mq-deadline registered
Sep 12 23:56:46.918376 kernel: io scheduler kyber registered
Sep 12 23:56:46.918384 kernel: io scheduler bfq registered
Sep 12 23:56:46.918395 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 12 23:56:46.918486 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Sep 12 23:56:46.918558 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Sep 12 23:56:46.918666 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 23:56:46.918742 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Sep 12 23:56:46.918811 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Sep 12 23:56:46.918878 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 23:56:46.918957 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Sep 12 23:56:46.919027 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Sep 12 23:56:46.919096 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 23:56:46.919167 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Sep 12 23:56:46.919235 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Sep 12 23:56:46.919768 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 23:56:46.919857 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Sep 12 23:56:46.919925 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Sep 12 23:56:46.919991 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 23:56:46.920061 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Sep 12 23:56:46.920127 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Sep 12 23:56:46.920199 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 23:56:46.920301 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Sep 12 23:56:46.920391 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Sep 12 23:56:46.920460 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 23:56:46.920531 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Sep 12 23:56:46.923704 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Sep 12 23:56:46.923911 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 23:56:46.923926 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Sep 12 23:56:46.924030 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Sep 12 23:56:46.924121 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Sep 12 23:56:46.924202 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 12 23:56:46.924217 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 23:56:46.924226 kernel: ACPI: button: Power Button [PWRB]
Sep 12 23:56:46.924236 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 23:56:46.924331 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Sep 12 23:56:46.924487 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Sep 12 23:56:46.924508 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:56:46.924517 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 12 23:56:46.924639 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Sep 12 23:56:46.924654 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Sep 12 23:56:46.924663 kernel: thunder_xcv, ver 1.0
Sep 12 23:56:46.924672 kernel: thunder_bgx, ver 1.0
Sep 12 23:56:46.924685 kernel: nicpf, ver 1.0
Sep 12 23:56:46.924693 kernel: nicvf, ver 1.0
Sep 12 23:56:46.924790 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 23:56:46.924860 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:56:46 UTC (1757721406)
Sep 12 23:56:46.924872 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 23:56:46.924880 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 12 23:56:46.924891 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 23:56:46.924900 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 23:56:46.924912 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:56:46.924921 kernel: Segment Routing with IPv6
Sep 12 23:56:46.924930 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:56:46.924938 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:56:46.924946 kernel: Key type dns_resolver registered
Sep 12 23:56:46.924954 kernel: registered taskstats version 1
Sep 12 23:56:46.924961 kernel: Loading compiled-in X.509 certificates
Sep 12 23:56:46.924969 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 12 23:56:46.924977 kernel: Key type .fscrypt registered
Sep 12 23:56:46.924985 kernel: Key type fscrypt-provisioning registered
Sep 12 23:56:46.924995 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:56:46.925002 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:56:46.925010 kernel: ima: No architecture policies found
Sep 12 23:56:46.925018 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 23:56:46.925028 kernel: clk: Disabling unused clocks
Sep 12 23:56:46.925037 kernel: Freeing unused kernel memory: 39488K
Sep 12 23:56:46.925046 kernel: Run /init as init process
Sep 12 23:56:46.925055 kernel: with arguments:
Sep 12 23:56:46.925066 kernel: /init
Sep 12 23:56:46.925075 kernel: with environment:
Sep 12 23:56:46.925084 kernel: HOME=/
Sep 12 23:56:46.925093 kernel: TERM=linux
Sep 12 23:56:46.925101 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:56:46.925114 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:56:46.925125 systemd[1]: Detected virtualization kvm.
Sep 12 23:56:46.925135 systemd[1]: Detected architecture arm64.
Sep 12 23:56:46.925147 systemd[1]: Running in initrd.
Sep 12 23:56:46.925156 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:56:46.925166 systemd[1]: Hostname set to .
Sep 12 23:56:46.925177 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:56:46.925186 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:56:46.925197 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:56:46.925206 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:56:46.925216 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:56:46.925228 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:56:46.925238 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:56:46.925248 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:56:46.925259 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:56:46.925268 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:56:46.925277 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:56:46.925285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:56:46.925296 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:56:46.925304 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:56:46.925312 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:56:46.925320 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:56:46.925330 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:56:46.925350 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:56:46.925358 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:56:46.925367 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 23:56:46.925377 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:56:46.925386 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:56:46.925394 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:56:46.925402 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:56:46.925411 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:56:46.925419 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:56:46.925427 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:56:46.925435 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:56:46.925444 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:56:46.925454 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:56:46.925463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:56:46.925472 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:56:46.925480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:56:46.925488 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:56:46.925497 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:56:46.925536 systemd-journald[237]: Collecting audit messages is disabled.
Sep 12 23:56:46.925557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:56:46.925568 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:56:46.925576 kernel: Bridge firewalling registered
Sep 12 23:56:46.925585 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:56:46.925593 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:56:46.926932 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:56:46.926952 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:56:46.926961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:56:46.926974 systemd-journald[237]: Journal started
Sep 12 23:56:46.927005 systemd-journald[237]: Runtime Journal (/run/log/journal/07e3b4878b9c44aca4ba86cd01b2d704) is 8.0M, max 76.6M, 68.6M free.
Sep 12 23:56:46.884352 systemd-modules-load[238]: Inserted module 'overlay'
Sep 12 23:56:46.904114 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 12 23:56:46.931423 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:56:46.937707 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:56:46.948988 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:56:46.950791 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:56:46.952580 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:56:46.959917 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:56:46.960782 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:56:46.971867 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:56:46.979471 dracut-cmdline[271]: dracut-dracut-053
Sep 12 23:56:46.983616 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:56:47.005439 systemd-resolved[273]: Positive Trust Anchors:
Sep 12 23:56:47.005457 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:56:47.005489 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:56:47.011413 systemd-resolved[273]: Defaulting to hostname 'linux'.
Sep 12 23:56:47.012563 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:56:47.013311 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:56:47.087697 kernel: SCSI subsystem initialized
Sep 12 23:56:47.092644 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:56:47.100657 kernel: iscsi: registered transport (tcp)
Sep 12 23:56:47.114646 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:56:47.114714 kernel: QLogic iSCSI HBA Driver
Sep 12 23:56:47.169334 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:56:47.174991 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:56:47.198938 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:56:47.199030 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:56:47.199677 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 23:56:47.252645 kernel: raid6: neonx8 gen() 15546 MB/s
Sep 12 23:56:47.269672 kernel: raid6: neonx4 gen() 13637 MB/s
Sep 12 23:56:47.286672 kernel: raid6: neonx2 gen() 12902 MB/s
Sep 12 23:56:47.303715 kernel: raid6: neonx1 gen() 10262 MB/s
Sep 12 23:56:47.320663 kernel: raid6: int64x8 gen() 6796 MB/s
Sep 12 23:56:47.337725 kernel: raid6: int64x4 gen() 7233 MB/s
Sep 12 23:56:47.354694 kernel: raid6: int64x2 gen() 5960 MB/s
Sep 12 23:56:47.371673 kernel: raid6: int64x1 gen() 4943 MB/s
Sep 12 23:56:47.371763 kernel: raid6: using algorithm neonx8 gen() 15546 MB/s
Sep 12 23:56:47.388745 kernel: raid6: .... xor() 11828 MB/s, rmw enabled
Sep 12 23:56:47.388825 kernel: raid6: using neon recovery algorithm
Sep 12 23:56:47.393642 kernel: xor: measuring software checksum speed
Sep 12 23:56:47.393712 kernel: 8regs : 19773 MB/sec
Sep 12 23:56:47.394768 kernel: 32regs : 17780 MB/sec
Sep 12 23:56:47.394826 kernel: arm64_neon : 20749 MB/sec
Sep 12 23:56:47.394849 kernel: xor: using function: arm64_neon (20749 MB/sec)
Sep 12 23:56:47.449691 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:56:47.467559 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:56:47.472830 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:56:47.496638 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Sep 12 23:56:47.500245 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:56:47.509904 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:56:47.528169 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Sep 12 23:56:47.569466 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:56:47.575904 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:56:47.633462 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:56:47.640477 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:56:47.663038 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:56:47.664913 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:56:47.665551 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:56:47.666882 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:56:47.674922 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:56:47.693711 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:56:47.744411 kernel: scsi host0: Virtio SCSI HBA
Sep 12 23:56:47.753688 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 23:56:47.753821 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Sep 12 23:56:47.770070 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:56:47.770215 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:56:47.771776 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:56:47.774074 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:56:47.774248 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:56:47.778318 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:56:47.783695 kernel: ACPI: bus type USB registered
Sep 12 23:56:47.789001 kernel: usbcore: registered new interface driver usbfs
Sep 12 23:56:47.789634 kernel: usbcore: registered new interface driver hub
Sep 12 23:56:47.792000 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:56:47.800760 kernel: usbcore: registered new device driver usb
Sep 12 23:56:47.814554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:56:47.825027 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 12 23:56:47.825253 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Sep 12 23:56:47.825370 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 12 23:56:47.826858 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 12 23:56:47.827133 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Sep 12 23:56:47.827229 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Sep 12 23:56:47.828680 kernel: hub 1-0:1.0: USB hub found
Sep 12 23:56:47.828871 kernel: hub 1-0:1.0: 4 ports detected
Sep 12 23:56:47.829002 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:56:47.832618 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 12 23:56:47.834691 kernel: hub 2-0:1.0: USB hub found
Sep 12 23:56:47.834973 kernel: hub 2-0:1.0: 4 ports detected
Sep 12 23:56:47.836822 kernel: sr 0:0:0:0: Power-on or device reset occurred
Sep 12 23:56:47.841617 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Sep 12 23:56:47.842427 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 23:56:47.844642 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Sep 12 23:56:47.852916 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:56:47.862894 kernel: sd 0:0:0:1: Power-on or device reset occurred
Sep 12 23:56:47.863102 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Sep 12 23:56:47.863195 kernel: sd 0:0:0:1: [sda] Write Protect is off
Sep 12 23:56:47.863277 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Sep 12 23:56:47.863378 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 12 23:56:47.867674 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 23:56:47.867742 kernel: GPT:17805311 != 80003071
Sep 12 23:56:47.867761 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 23:56:47.867772 kernel: GPT:17805311 != 80003071
Sep 12 23:56:47.869623 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:56:47.869675 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 23:56:47.869686 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Sep 12 23:56:47.911628 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (512)
Sep 12 23:56:47.921767 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (499)
Sep 12 23:56:47.922970 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Sep 12 23:56:47.930376 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 12 23:56:47.945919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 12 23:56:47.950276 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 12 23:56:47.951019 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 12 23:56:47.958895 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 23:56:47.969181 disk-uuid[571]: Primary Header is updated. Sep 12 23:56:47.969181 disk-uuid[571]: Secondary Entries is updated. Sep 12 23:56:47.969181 disk-uuid[571]: Secondary Header is updated. Sep 12 23:56:47.977626 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:56:47.983641 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:56:48.074776 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 12 23:56:48.210816 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 12 23:56:48.210900 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 12 23:56:48.211227 kernel: usbcore: registered new interface driver usbhid Sep 12 23:56:48.211252 kernel: usbhid: USB HID core driver Sep 12 23:56:48.319638 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 12 23:56:48.451633 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 12 23:56:48.506841 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 12 23:56:48.993567 disk-uuid[572]: The operation has completed successfully. Sep 12 23:56:48.994281 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:56:49.050138 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 23:56:49.050254 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 23:56:49.066900 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 23:56:49.073593 sh[590]: Success Sep 12 23:56:49.087783 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 23:56:49.149722 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 23:56:49.152835 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 23:56:49.158066 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 23:56:49.188120 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77 Sep 12 23:56:49.188183 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:56:49.188195 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 23:56:49.188983 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 23:56:49.189013 kernel: BTRFS info (device dm-0): using free space tree Sep 12 23:56:49.195683 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 23:56:49.197869 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
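disk-uuid.service rewrites the GPT headers (the three "updated" lines are its output), and verity-setup.service then assembles the read-only /dev/mapper/usr device, with dm-verity hashing each block via the ARMv8 crypto extensions ("sha256-ce"). The manual equivalent of the verity step looks roughly like the sketch below; the device paths and root hash are placeholders, not values from this boot:

    # illustrative only: open a dm-verity target named "usr"
    # veritysetup open <data_device> <name> <hash_device> <root_hash>
    veritysetup open /dev/DATA usr /dev/HASH "$ROOT_HASH"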
Sep 12 23:56:49.200158 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 23:56:49.209925 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 23:56:49.216851 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 23:56:49.228658 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:56:49.228728 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:56:49.228755 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:56:49.233830 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 23:56:49.233933 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:56:49.248179 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 23:56:49.249847 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:56:49.255971 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 23:56:49.263889 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 23:56:49.371400 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:56:49.376356 ignition[677]: Ignition 2.19.0 Sep 12 23:56:49.376367 ignition[677]: Stage: fetch-offline Sep 12 23:56:49.379859 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:56:49.376408 ignition[677]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:49.381034 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:56:49.376417 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:49.376591 ignition[677]: parsed url from cmdline: "" Sep 12 23:56:49.376594 ignition[677]: no config URL provided Sep 12 23:56:49.376612 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:56:49.376636 ignition[677]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:56:49.376642 ignition[677]: failed to fetch config: resource requires networking Sep 12 23:56:49.376990 ignition[677]: Ignition finished successfully Sep 12 23:56:49.408131 systemd-networkd[777]: lo: Link UP Sep 12 23:56:49.408145 systemd-networkd[777]: lo: Gained carrier Sep 12 23:56:49.410223 systemd-networkd[777]: Enumeration completed Sep 12 23:56:49.410871 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:56:49.412670 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:49.412677 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:56:49.412960 systemd[1]: Reached target network.target - Network. Sep 12 23:56:49.413685 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:49.413688 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:56:49.414253 systemd-networkd[777]: eth0: Link UP Sep 12 23:56:49.414257 systemd-networkd[777]: eth0: Gained carrier Sep 12 23:56:49.414265 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
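The fetch-offline stage above looked for a baked-in config at /usr/lib/ignition/user.ign, found none, and bailed out with "resource requires networking" because the kernel command line supplied no config URL either. For reference, the smallest config that would satisfy such a probe is a bare version stanza (spec version illustrative for the Ignition 2.19.0 seen here):

    {
      "ignition": { "version": "3.4.0" }
    }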
Sep 12 23:56:49.420925 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 23:56:49.422693 systemd-networkd[777]: eth1: Link UP Sep 12 23:56:49.422700 systemd-networkd[777]: eth1: Gained carrier Sep 12 23:56:49.422711 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:49.438757 ignition[780]: Ignition 2.19.0 Sep 12 23:56:49.438768 ignition[780]: Stage: fetch Sep 12 23:56:49.438950 ignition[780]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:49.438960 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:49.439053 ignition[780]: parsed url from cmdline: "" Sep 12 23:56:49.439056 ignition[780]: no config URL provided Sep 12 23:56:49.439061 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:56:49.439068 ignition[780]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:56:49.439086 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 12 23:56:49.439924 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 12 23:56:49.456729 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 12 23:56:49.482739 systemd-networkd[777]: eth0: DHCPv4 address 128.140.85.90/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 12 23:56:49.640161 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 12 23:56:49.647534 ignition[780]: GET result: OK Sep 12 23:56:49.647717 ignition[780]: parsing config with SHA512: 8f61990ef9e5071d16080b18695b0cd8f155921b154788c81592a095246303982af8a584d4d0994b03bf859bc87ec436a7ef0506adbb5312d6621702439fe81d Sep 12 23:56:49.653117 unknown[780]: fetched base config from "system" Sep 12 23:56:49.653144 unknown[780]: fetched base config from "system" Sep 12 23:56:49.653577 ignition[780]: fetch: fetch complete Sep 12 23:56:49.653149 unknown[780]: fetched user config from "hetzner" Sep 12 23:56:49.653582 ignition[780]: fetch: fetch passed Sep 12 23:56:49.653994 ignition[780]: Ignition finished successfully Sep 12 23:56:49.656890 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 23:56:49.663146 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 23:56:49.680496 ignition[787]: Ignition 2.19.0 Sep 12 23:56:49.680510 ignition[787]: Stage: kargs Sep 12 23:56:49.680742 ignition[787]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:49.680753 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:49.681830 ignition[787]: kargs: kargs passed Sep 12 23:56:49.681891 ignition[787]: Ignition finished successfully Sep 12 23:56:49.683551 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 23:56:49.692922 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 23:56:49.709517 ignition[795]: Ignition 2.19.0 Sep 12 23:56:49.709531 ignition[795]: Stage: disks Sep 12 23:56:49.709817 ignition[795]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:49.709830 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:49.711018 ignition[795]: disks: disks passed Sep 12 23:56:49.713271 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
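The first GET to the Hetzner metadata service failed simply because it raced DHCP; once eth0/eth1 obtained leases, the second attempt succeeded. The same link-local endpoint can be queried from a shell on the booted machine:

    curl -s http://169.254.169.254/hetzner/v1/userdata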
Sep 12 23:56:49.711091 ignition[795]: Ignition finished successfully Sep 12 23:56:49.715024 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 23:56:49.715781 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 23:56:49.716914 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:56:49.717779 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:56:49.719748 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:56:49.731941 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 23:56:49.754086 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 12 23:56:49.758685 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 23:56:49.768781 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 23:56:49.825618 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none. Sep 12 23:56:49.826771 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 23:56:49.829048 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 23:56:49.839890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:56:49.843678 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 23:56:49.848140 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 23:56:49.849967 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 23:56:49.851553 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:56:49.856309 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 23:56:49.862623 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (811) Sep 12 23:56:49.864644 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:56:49.864692 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:56:49.864705 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:56:49.869908 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 23:56:49.876045 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 23:56:49.876112 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:56:49.880682 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 23:56:49.933772 coreos-metadata[813]: Sep 12 23:56:49.933 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 12 23:56:49.935996 coreos-metadata[813]: Sep 12 23:56:49.935 INFO Fetch successful Sep 12 23:56:49.940638 coreos-metadata[813]: Sep 12 23:56:49.938 INFO wrote hostname ci-4081-3-5-n-f526684106 to /sysroot/etc/hostname Sep 12 23:56:49.942031 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 23:56:49.944899 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
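flatcar-metadata-hostname fetched the machine name from the metadata service and wrote it into /sysroot/etc/hostname before the root switch; the endpoint it used is visible in the coreos-metadata lines and can be checked by hand:

    curl -s http://169.254.169.254/hetzner/v1/metadata/hostname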
Sep 12 23:56:49.950998 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Sep 12 23:56:49.956455 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 23:56:49.961543 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 23:56:50.079805 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 23:56:50.086796 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 23:56:50.093063 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 23:56:50.101701 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:56:50.132727 ignition[928]: INFO : Ignition 2.19.0 Sep 12 23:56:50.132727 ignition[928]: INFO : Stage: mount Sep 12 23:56:50.132727 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:50.132727 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:50.132727 ignition[928]: INFO : mount: mount passed Sep 12 23:56:50.137153 ignition[928]: INFO : Ignition finished successfully Sep 12 23:56:50.136722 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 23:56:50.138834 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 23:56:50.145869 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 23:56:50.188795 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 23:56:50.197579 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:56:50.213721 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940) Sep 12 23:56:50.216731 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:56:50.216806 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:56:50.216818 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:56:50.221699 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 23:56:50.221775 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:56:50.225264 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
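The "cut: /sysroot/etc/passwd: No such file or directory" message and the group/shadow/gshadow variants above are expected on a first boot: initrd-setup-root slices fields out of account databases that do not yet exist in the still-empty /sysroot/etc. The failing pattern is of this shape (a sketch; the script's exact arguments are not shown in this log):

    cut -d: -f1 /sysroot/etc/passwd   # fails until /sysroot/etc is populated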
Sep 12 23:56:50.247177 ignition[956]: INFO : Ignition 2.19.0 Sep 12 23:56:50.247177 ignition[956]: INFO : Stage: files Sep 12 23:56:50.248530 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:50.248530 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:50.250430 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Sep 12 23:56:50.250430 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 23:56:50.250430 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 23:56:50.253981 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 23:56:50.254920 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 23:56:50.255954 unknown[956]: wrote ssh authorized keys file for user: core Sep 12 23:56:50.257179 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 23:56:50.258980 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 12 23:56:50.260202 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 12 23:56:50.391830 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 23:56:50.743034 systemd-networkd[777]: eth0: Gained IPv6LL Sep 12 23:56:51.383127 systemd-networkd[777]: eth1: Gained IPv6LL Sep 12 23:56:51.804073 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 12 23:56:51.804073 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 23:56:51.807972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 12 23:56:52.166157 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 23:56:53.421790 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 23:56:53.421790 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 23:56:53.425425 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:56:53.425425 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:56:53.425425 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 23:56:53.425425 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 12 23:56:53.425425 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 12 23:56:53.425425 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 12 23:56:53.425425 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 12 23:56:53.425425 ignition[956]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Sep 12 23:56:53.425425 ignition[956]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 23:56:53.425425 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:56:53.425425 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:56:53.425425 ignition[956]: INFO : files: files passed Sep 12 23:56:53.425425 ignition[956]: INFO : Ignition finished successfully Sep 12 23:56:53.428474 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 23:56:53.435892 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 23:56:53.441808 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 23:56:53.449071 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 23:56:53.449198 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 12 23:56:53.460808 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:56:53.460808 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:56:53.463048 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:56:53.464815 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:56:53.466115 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 23:56:53.471886 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 23:56:53.501573 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 23:56:53.501752 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 23:56:53.503819 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 23:56:53.504738 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 23:56:53.506035 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 23:56:53.518498 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 23:56:53.533692 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:56:53.539844 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 23:56:53.553448 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:56:53.554284 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:56:53.555735 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 23:56:53.556825 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 23:56:53.556975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:56:53.558218 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 23:56:53.558870 systemd[1]: Stopped target basic.target - Basic System. Sep 12 23:56:53.559920 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 23:56:53.560999 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:56:53.562251 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 23:56:53.563422 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 23:56:53.564555 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:56:53.565835 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 23:56:53.567043 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 23:56:53.568651 systemd[1]: Stopped target swap.target - Swaps. Sep 12 23:56:53.569533 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 23:56:53.569677 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:56:53.570960 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:56:53.571621 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:56:53.572658 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 23:56:53.574683 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
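The two grep failures come from initrd-setup-root-after-ignition probing /sysroot/etc/flatcar/enabled-sysext.conf and /sysroot/usr/share/flatcar/enabled-sysext.conf, the files Flatcar reads to enable optional system extensions; their absence is the default case. If one were provided, the format is believed to be one extension name per line, e.g. (illustrative):

    # /etc/flatcar/enabled-sysext.conf
    zfs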
Sep 12 23:56:53.576464 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 23:56:53.576676 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 23:56:53.578533 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 23:56:53.578685 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:56:53.579923 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 23:56:53.580017 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 23:56:53.580989 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 23:56:53.581083 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 23:56:53.594003 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 23:56:53.595981 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 23:56:53.596236 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:56:53.599857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 23:56:53.601868 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 23:56:53.602066 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:56:53.603969 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 23:56:53.604398 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:56:53.617794 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 23:56:53.617932 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 23:56:53.623336 ignition[1009]: INFO : Ignition 2.19.0 Sep 12 23:56:53.623336 ignition[1009]: INFO : Stage: umount Sep 12 23:56:53.626069 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:53.626069 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:53.626069 ignition[1009]: INFO : umount: umount passed Sep 12 23:56:53.626069 ignition[1009]: INFO : Ignition finished successfully Sep 12 23:56:53.627271 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 23:56:53.627466 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 23:56:53.628214 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 23:56:53.628264 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 23:56:53.630905 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 23:56:53.630966 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 23:56:53.632825 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 23:56:53.632926 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 23:56:53.634547 systemd[1]: Stopped target network.target - Network. Sep 12 23:56:53.635807 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 23:56:53.635894 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:56:53.636915 systemd[1]: Stopped target paths.target - Path Units. Sep 12 23:56:53.638292 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 23:56:53.638394 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:56:53.639509 systemd[1]: Stopped target slices.target - Slice Units. 
Sep 12 23:56:53.640163 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 23:56:53.640807 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 23:56:53.640863 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:56:53.641830 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 23:56:53.641872 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:56:53.643639 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 23:56:53.643697 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 23:56:53.645288 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 23:56:53.645386 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 23:56:53.646385 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 23:56:53.647416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 23:56:53.650099 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 23:56:53.650736 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 23:56:53.650830 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 23:56:53.652061 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 23:56:53.652152 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 23:56:53.653151 systemd-networkd[777]: eth0: DHCPv6 lease lost Sep 12 23:56:53.655079 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 23:56:53.655190 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 23:56:53.657003 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 23:56:53.657100 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:56:53.658992 systemd-networkd[777]: eth1: DHCPv6 lease lost Sep 12 23:56:53.662933 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 23:56:53.663089 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 23:56:53.664096 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 23:56:53.664133 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:56:53.670879 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 23:56:53.671405 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 23:56:53.671480 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:56:53.674884 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:56:53.674954 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:56:53.676100 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 23:56:53.676157 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 23:56:53.677001 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:56:53.696023 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 23:56:53.696249 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:56:53.698006 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 23:56:53.698080 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 23:56:53.700053 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 12 23:56:53.700107 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:56:53.701796 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 23:56:53.701856 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:56:53.703708 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 23:56:53.703759 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 23:56:53.705206 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:56:53.705254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:56:53.713923 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 23:56:53.714524 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 23:56:53.714594 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:56:53.716968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:56:53.717023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:56:53.718457 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 23:56:53.718585 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 23:56:53.728046 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 23:56:53.728182 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 23:56:53.729790 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 23:56:53.739033 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 23:56:53.750402 systemd[1]: Switching root. Sep 12 23:56:53.797459 systemd-journald[237]: Journal stopped Sep 12 23:56:54.699798 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 12 23:56:54.699866 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 23:56:54.699878 kernel: SELinux: policy capability open_perms=1 Sep 12 23:56:54.699888 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 23:56:54.699902 kernel: SELinux: policy capability always_check_network=0 Sep 12 23:56:54.699915 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 23:56:54.699924 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 23:56:54.699936 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 23:56:54.699946 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 23:56:54.699956 systemd[1]: Successfully loaded SELinux policy in 34.913ms. Sep 12 23:56:54.699978 kernel: audit: type=1403 audit(1757721413.966:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 23:56:54.699988 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.474ms. Sep 12 23:56:54.700000 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:56:54.700011 systemd[1]: Detected virtualization kvm. Sep 12 23:56:54.700023 systemd[1]: Detected architecture arm64. Sep 12 23:56:54.700034 systemd[1]: Detected first boot. Sep 12 23:56:54.700044 systemd[1]: Hostname set to <ci-4081-3-5-n-f526684106>. Sep 12 23:56:54.700054 systemd[1]: Initializing machine ID from VM UUID.
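At this point the initrd journal stops, PID 1 switches into the real root, loads the SELinux policy, and re-executes with the feature set shown. The same compile-time feature string can be printed on any running system for comparison:

    systemctl --version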
Sep 12 23:56:54.700068 zram_generator::config[1051]: No configuration found. Sep 12 23:56:54.700082 systemd[1]: Populated /etc with preset unit settings. Sep 12 23:56:54.700093 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 23:56:54.700104 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 23:56:54.700115 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 23:56:54.700126 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 23:56:54.700136 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 23:56:54.700147 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 23:56:54.700156 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 23:56:54.700167 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 23:56:54.700177 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 23:56:54.700188 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 23:56:54.700199 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 23:56:54.700210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:56:54.700220 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:56:54.700231 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 23:56:54.700241 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 23:56:54.700251 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 23:56:54.700262 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:56:54.700272 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 23:56:54.700282 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:56:54.700295 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 23:56:54.700306 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 23:56:54.700327 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 23:56:54.700339 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 23:56:54.700349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:56:54.700360 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:56:54.700373 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:56:54.700384 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:56:54.700394 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 23:56:54.700405 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 23:56:54.700415 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:56:54.700425 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:56:54.700436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:56:54.700446 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
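"zram_generator::config: No configuration found" only means no /etc/systemd/zram-generator.conf (or /usr/lib equivalent) exists, so no compressed-RAM swap devices are set up. Were one wanted, a minimal config looks like this (values illustrative):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd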
Sep 12 23:56:54.700457 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 23:56:54.700469 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 23:56:54.700479 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 23:56:54.700493 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 23:56:54.700503 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 23:56:54.700514 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 23:56:54.700525 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 23:56:54.700536 systemd[1]: Reached target machines.target - Containers. Sep 12 23:56:54.700547 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 23:56:54.700557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:56:54.700569 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:56:54.700580 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 23:56:54.700590 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:56:54.701635 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:56:54.701657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:56:54.701671 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 23:56:54.701681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:56:54.701692 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 23:56:54.701702 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 23:56:54.701714 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 23:56:54.701724 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 23:56:54.701735 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 23:56:54.701745 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:56:54.701755 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:56:54.701768 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 23:56:54.701779 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 23:56:54.701819 systemd-journald[1121]: Collecting audit messages is disabled. Sep 12 23:56:54.701842 kernel: loop: module loaded Sep 12 23:56:54.701854 kernel: fuse: init (API version 7.39) Sep 12 23:56:54.701864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:56:54.701875 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 23:56:54.701888 systemd[1]: Stopped verity-setup.service. Sep 12 23:56:54.701900 systemd-journald[1121]: Journal started Sep 12 23:56:54.701922 systemd-journald[1121]: Runtime Journal (/run/log/journal/07e3b4878b9c44aca4ba86cd01b2d704) is 8.0M, max 76.6M, 68.6M free. Sep 12 23:56:54.477613 systemd[1]: Queued start job for default target multi-user.target. 
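The systemd-journald line above shows the volatile journal in /run capped at 76.6M with 8.0M in use. Those ceilings can be pinned explicitly through a journald drop-in (values illustrative):

    # /etc/systemd/journald.conf.d/size.conf
    [Journal]
    RuntimeMaxUse=64M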
Sep 12 23:56:54.496779 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 12 23:56:54.497200 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 23:56:54.703058 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:56:54.706551 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 23:56:54.707880 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 23:56:54.709800 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 23:56:54.710428 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 23:56:54.712519 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 23:56:54.712641 kernel: ACPI: bus type drm_connector registered Sep 12 23:56:54.714925 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 23:56:54.718004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:56:54.718990 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 23:56:54.719143 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 23:56:54.720085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:56:54.720653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:56:54.721939 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:56:54.722081 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:56:54.723979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:56:54.724126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:56:54.726250 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 23:56:54.726403 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 23:56:54.729044 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:56:54.729849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:56:54.736172 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 23:56:54.737743 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:56:54.739446 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 23:56:54.740836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:56:54.749296 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 23:56:54.757838 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 23:56:54.762757 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 23:56:54.763550 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 23:56:54.763735 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:56:54.765477 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 23:56:54.781870 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 23:56:54.787285 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 23:56:54.788363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 23:56:54.797084 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 23:56:54.801703 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 23:56:54.802365 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:56:54.803479 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 23:56:54.806792 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:56:54.808796 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:56:54.814876 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 23:56:54.820750 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 23:56:54.823281 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 23:56:54.825527 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 23:56:54.828143 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:56:54.840568 systemd-journald[1121]: Time spent on flushing to /var/log/journal/07e3b4878b9c44aca4ba86cd01b2d704 is 51.141ms for 1123 entries. Sep 12 23:56:54.840568 systemd-journald[1121]: System Journal (/var/log/journal/07e3b4878b9c44aca4ba86cd01b2d704) is 8.0M, max 584.8M, 576.8M free. Sep 12 23:56:54.918056 systemd-journald[1121]: Received client request to flush runtime journal. Sep 12 23:56:54.918118 kernel: loop0: detected capacity change from 0 to 114432 Sep 12 23:56:54.918176 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 23:56:54.866975 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 23:56:54.870012 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 23:56:54.875963 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 23:56:54.882359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:56:54.914214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:56:54.925442 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 23:56:54.927241 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 23:56:54.931629 kernel: loop1: detected capacity change from 0 to 8 Sep 12 23:56:54.947471 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 23:56:54.952775 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 23:56:54.958638 kernel: loop2: detected capacity change from 0 to 114328 Sep 12 23:56:54.963202 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 23:56:54.973251 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 23:56:54.984860 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:56:55.003658 kernel: loop3: detected capacity change from 0 to 211168 Sep 12 23:56:55.037382 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. 
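The udevadm warning above notes that the deprecated systemd-udev-settle.service is still being pulled in, and even names the lvm2 activation units responsible. On a live system the dependents can be enumerated directly:

    systemctl list-dependencies --reverse systemd-udev-settle.service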
Sep 12 23:56:55.038102 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Sep 12 23:56:55.039635 kernel: loop4: detected capacity change from 0 to 114432 Sep 12 23:56:55.049848 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:56:55.057857 kernel: loop5: detected capacity change from 0 to 8 Sep 12 23:56:55.062740 kernel: loop6: detected capacity change from 0 to 114328 Sep 12 23:56:55.074681 kernel: loop7: detected capacity change from 0 to 211168 Sep 12 23:56:55.096465 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 12 23:56:55.096971 (sd-merge)[1190]: Merged extensions into '/usr'. Sep 12 23:56:55.103152 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 23:56:55.103517 systemd[1]: Reloading... Sep 12 23:56:55.198668 zram_generator::config[1218]: No configuration found. Sep 12 23:56:55.368722 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 23:56:55.400574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:56:55.448778 systemd[1]: Reloading finished in 344 ms. Sep 12 23:56:55.487252 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 23:56:55.488874 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 23:56:55.497970 systemd[1]: Starting ensure-sysext.service... Sep 12 23:56:55.501893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:56:55.510102 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Sep 12 23:56:55.510247 systemd[1]: Reloading... Sep 12 23:56:55.549042 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 23:56:55.549384 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 23:56:55.550090 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 23:56:55.550395 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Sep 12 23:56:55.550441 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Sep 12 23:56:55.559768 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:56:55.559992 systemd-tmpfiles[1256]: Skipping /boot Sep 12 23:56:55.570025 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:56:55.570258 systemd-tmpfiles[1256]: Skipping /boot Sep 12 23:56:55.613689 zram_generator::config[1282]: No configuration found. Sep 12 23:56:55.721152 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:56:55.770406 systemd[1]: Reloading finished in 259 ms. Sep 12 23:56:55.789784 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 23:56:55.790924 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
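sd-merge overlaid the four listed extension images onto /usr (the loop0-loop7 capacity changes above are those images being attached), which is why the subsequent daemon reload picks up the containerd, docker, and kubernetes payloads. The merged state can be inspected afterwards with:

    systemd-sysext status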
Sep 12 23:56:55.811046 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:56:55.814986 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:56:55.818944 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:56:55.825751 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:56:55.830847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:56:55.832881 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:56:55.836592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:56:55.845929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:56:55.851862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:56:55.860104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:56:55.861160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:56:55.867913 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 23:56:55.869861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:56:55.871680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:56:55.879948 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:56:55.887638 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:56:55.889784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:56:55.890576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:56:55.890805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:56:55.892471 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:56:55.898150 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:56:55.904928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:56:55.908392 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:56:55.912058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:56:55.913048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:56:55.915646 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:56:55.928086 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 23:56:55.933693 systemd[1]: Finished ensure-sysext.service. Sep 12 23:56:55.934547 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:56:55.934736 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:56:55.936952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 12 23:56:55.937087 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:56:55.943065 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:56:55.952001 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 23:56:55.954052 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Sep 12 23:56:55.965069 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:56:55.965234 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:56:55.969957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:56:55.973706 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:56:55.974913 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:56:55.990057 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 23:56:55.994555 augenrules[1361]: No rules Sep 12 23:56:55.998400 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:56:56.002591 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 23:56:56.004395 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 23:56:56.009385 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:56:56.017746 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:56:56.022844 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:56:56.073428 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 23:56:56.074284 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:56:56.124039 systemd-resolved[1325]: Positive Trust Anchors: Sep 12 23:56:56.124071 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:56:56.124109 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:56:56.134553 systemd-resolved[1325]: Using system hostname 'ci-4081-3-5-n-f526684106'. Sep 12 23:56:56.137427 systemd-networkd[1376]: lo: Link UP Sep 12 23:56:56.137526 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:56:56.138787 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:56:56.139519 systemd-networkd[1376]: lo: Gained carrier Sep 12 23:56:56.141002 systemd-networkd[1376]: Enumeration completed Sep 12 23:56:56.141107 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:56:56.141776 systemd[1]: Reached target network.target - Network. 
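augenrules[1361] reports "No rules" because nothing is installed under /etc/audit/rules.d yet, so audit-rules.service finishes with an empty ruleset. A minimal sketch of populating it (the file name and watched path are illustrative assumptions, not taken from this host):

    cat <<'EOF' | sudo tee /etc/audit/rules.d/10-sshd.rules
    -w /etc/ssh/sshd_config -p wa -k sshd-config
    EOF
    sudo augenrules --load
    sudo auditctl -l    # should now list the watch instead of "No rules"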
Sep 12 23:56:56.158958 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 23:56:56.164957 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 23:56:56.207967 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:56.207977 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:56:56.208681 systemd-networkd[1376]: eth1: Link UP Sep 12 23:56:56.208685 systemd-networkd[1376]: eth1: Gained carrier Sep 12 23:56:56.208701 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:56.246854 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 12 23:56:56.247896 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Sep 12 23:56:56.279089 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:56.279104 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:56:56.281858 systemd-networkd[1376]: eth0: Link UP Sep 12 23:56:56.281869 systemd-networkd[1376]: eth0: Gained carrier Sep 12 23:56:56.281900 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:56.282265 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Sep 12 23:56:56.283981 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Sep 12 23:56:56.294654 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1393) Sep 12 23:56:56.334900 systemd-networkd[1376]: eth0: DHCPv4 address 128.140.85.90/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 12 23:56:56.335396 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Sep 12 23:56:56.335823 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Sep 12 23:56:56.355668 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 23:56:56.359253 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 12 23:56:56.359971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:56:56.367157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:56:56.371215 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:56:56.377838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:56:56.378797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:56:56.378835 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:56:56.381109 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
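systemd-networkd flags both eth1 and eth0 as matched by the catch-all zz-default.network "based on potentially unpredictable interface name". Matching on a stable link property instead avoids that warning; a sketch of a site-local unit, assuming a placeholder MAC address:

    cat <<'EOF' | sudo tee /etc/systemd/network/10-uplink.network
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff
    [Network]
    DHCP=ipv4
    EOF
    sudo networkctl reload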
Sep 12 23:56:56.386872 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:56:56.387079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:56:56.391049 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 12 23:56:56.391145 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 12 23:56:56.391159 kernel: [drm] features: -context_init Sep 12 23:56:56.391171 kernel: [drm] number of scanouts: 1 Sep 12 23:56:56.388712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:56:56.389654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:56:56.395817 kernel: [drm] number of cap sets: 0 Sep 12 23:56:56.400894 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 23:56:56.401785 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:56:56.408015 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:56:56.408165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:56:56.409110 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:56:56.420725 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 12 23:56:56.424706 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 23:56:56.446649 kernel: Console: switching to colour frame buffer device 160x50 Sep 12 23:56:56.459662 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 12 23:56:56.468977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:56:56.478423 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:56:56.480691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:56:56.488975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:56:56.555613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:56:56.620076 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 23:56:56.632969 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 23:56:56.646536 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:56:56.680824 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 23:56:56.681998 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:56:56.682836 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:56:56.683664 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:56:56.684816 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:56:56.685952 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:56:56.686780 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:56:56.687723 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 12 23:56:56.688445 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:56:56.688486 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:56:56.689044 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:56:56.693658 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:56:56.696656 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:56:56.703579 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:56:56.708217 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 23:56:56.710750 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:56:56.712020 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:56:56.712790 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:56:56.713568 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:56:56.713612 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:56:56.723647 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:56:56.727087 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 23:56:56.729320 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:56:56.732905 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:56:56.738161 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:56:56.744580 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:56:56.746700 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:56:56.748098 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:56:56.751662 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:56:56.755202 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 12 23:56:56.759896 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:56:56.763449 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:56:56.770900 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:56:56.776129 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:56:56.776780 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:56:56.777586 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:56:56.780944 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:56:56.797709 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 23:56:56.804617 jq[1447]: false Sep 12 23:56:56.810459 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:56:56.811008 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 23:56:56.835812 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:56:56.853934 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:56:56.854126 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:56:56.858262 jq[1458]: true Sep 12 23:56:56.864932 dbus-daemon[1446]: [system] SELinux support is enabled Sep 12 23:56:56.865693 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:56:56.878026 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:56:56.878065 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:56:56.879814 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:56:56.879834 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:56:56.883446 extend-filesystems[1448]: Found loop4 Sep 12 23:56:56.883446 extend-filesystems[1448]: Found loop5 Sep 12 23:56:56.883446 extend-filesystems[1448]: Found loop6 Sep 12 23:56:56.883446 extend-filesystems[1448]: Found loop7 Sep 12 23:56:56.883446 extend-filesystems[1448]: Found sda Sep 12 23:56:56.883446 extend-filesystems[1448]: Found sda1 Sep 12 23:56:56.883446 extend-filesystems[1448]: Found sda2 Sep 12 23:56:56.883446 extend-filesystems[1448]: Found sda3 Sep 12 23:56:56.883446 extend-filesystems[1448]: Found usr Sep 12 23:56:56.883446 extend-filesystems[1448]: Found sda4 Sep 12 23:56:56.883446 extend-filesystems[1448]: Found sda6 Sep 12 23:56:56.883446 extend-filesystems[1448]: Found sda7 Sep 12 23:56:56.883446 extend-filesystems[1448]: Found sda9 Sep 12 23:56:56.883446 extend-filesystems[1448]: Checking size of /dev/sda9 Sep 12 23:56:56.903628 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:56:56.903874 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:56:56.915198 coreos-metadata[1445]: Sep 12 23:56:56.915 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 12 23:56:56.920780 update_engine[1457]: I20250912 23:56:56.920543 1457 main.cc:92] Flatcar Update Engine starting Sep 12 23:56:56.922741 coreos-metadata[1445]: Sep 12 23:56:56.921 INFO Fetch successful Sep 12 23:56:56.922823 tar[1464]: linux-arm64/LICENSE Sep 12 23:56:56.922823 tar[1464]: linux-arm64/helm Sep 12 23:56:56.926087 coreos-metadata[1445]: Sep 12 23:56:56.923 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 12 23:56:56.926192 update_engine[1457]: I20250912 23:56:56.924384 1457 update_check_scheduler.cc:74] Next update check in 8m29s Sep 12 23:56:56.923510 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:56:56.926495 coreos-metadata[1445]: Sep 12 23:56:56.926 INFO Fetch successful Sep 12 23:56:56.933486 jq[1479]: true Sep 12 23:56:56.971909 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 12 23:56:56.972027 extend-filesystems[1448]: Resized partition /dev/sda9 Sep 12 23:56:56.935217 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
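The journal here shows the root filesystem being grown online from 1617920 to 9393147 4 KiB blocks, i.e. from roughly 6.2 GiB to about 35.8 GiB, which is what extend-filesystems.service automates on first boot. The equivalent manual steps, as a sketch:

    sudo resize2fs /dev/sda9    # online ext4 grow to fill the enlarged partition
    df -h /                     # confirm the new capacity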
Sep 12 23:56:56.974075 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Sep 12 23:56:56.995172 systemd-logind[1456]: New seat seat0. Sep 12 23:56:56.997518 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 23:56:56.997547 systemd-logind[1456]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 12 23:56:56.997796 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:56:57.086930 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 23:56:57.096733 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1396) Sep 12 23:56:57.087995 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 23:56:57.138556 bash[1510]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:56:57.144652 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:56:57.154281 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:56:57.156524 systemd[1]: Starting sshkeys.service... Sep 12 23:56:57.216294 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 12 23:56:57.212944 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 23:56:57.223400 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 23:56:57.241377 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 12 23:56:57.241377 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 12 23:56:57.241377 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 12 23:56:57.250084 extend-filesystems[1448]: Resized filesystem in /dev/sda9 Sep 12 23:56:57.250084 extend-filesystems[1448]: Found sr0 Sep 12 23:56:57.246961 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:56:57.248649 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:56:57.280531 coreos-metadata[1528]: Sep 12 23:56:57.280 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 12 23:56:57.281633 coreos-metadata[1528]: Sep 12 23:56:57.281 INFO Fetch successful Sep 12 23:56:57.285459 unknown[1528]: wrote ssh authorized keys file for user: core Sep 12 23:56:57.306901 containerd[1466]: time="2025-09-12T23:56:57.305989840Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 23:56:57.328289 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:56:57.330000 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 23:56:57.338681 systemd[1]: Finished sshkeys.service. Sep 12 23:56:57.382207 containerd[1466]: time="2025-09-12T23:56:57.382075640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:57.385967 containerd[1466]: time="2025-09-12T23:56:57.385909800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:56:57.385967 containerd[1466]: time="2025-09-12T23:56:57.385957320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 23:56:57.385967 containerd[1466]: time="2025-09-12T23:56:57.385976680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 23:56:57.386180 containerd[1466]: time="2025-09-12T23:56:57.386163480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 23:56:57.386207 containerd[1466]: time="2025-09-12T23:56:57.386182600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:57.386271 containerd[1466]: time="2025-09-12T23:56:57.386248320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:56:57.386271 containerd[1466]: time="2025-09-12T23:56:57.386266360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:57.386490 containerd[1466]: time="2025-09-12T23:56:57.386464920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:56:57.386520 containerd[1466]: time="2025-09-12T23:56:57.386489000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:57.386520 containerd[1466]: time="2025-09-12T23:56:57.386505520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:56:57.386520 containerd[1466]: time="2025-09-12T23:56:57.386515600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:57.386624 containerd[1466]: time="2025-09-12T23:56:57.386589960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:57.386881 containerd[1466]: time="2025-09-12T23:56:57.386853520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:57.386996 containerd[1466]: time="2025-09-12T23:56:57.386973280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:56:57.387024 containerd[1466]: time="2025-09-12T23:56:57.387000240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 23:56:57.387101 containerd[1466]: time="2025-09-12T23:56:57.387083000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 12 23:56:57.387146 containerd[1466]: time="2025-09-12T23:56:57.387131440Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:56:57.399810 systemd-networkd[1376]: eth0: Gained IPv6LL Sep 12 23:56:57.400958 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Sep 12 23:56:57.401805 containerd[1466]: time="2025-09-12T23:56:57.401258840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 23:56:57.401805 containerd[1466]: time="2025-09-12T23:56:57.401370960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 23:56:57.401805 containerd[1466]: time="2025-09-12T23:56:57.401390920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 23:56:57.401805 containerd[1466]: time="2025-09-12T23:56:57.401406440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 23:56:57.401805 containerd[1466]: time="2025-09-12T23:56:57.401421560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 23:56:57.401805 containerd[1466]: time="2025-09-12T23:56:57.401596440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 23:56:57.401931 containerd[1466]: time="2025-09-12T23:56:57.401845800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.401943920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.401970960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.401984720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.401998040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.402011480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.402024200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.402039080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.402054720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.402067800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.402086160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.402101200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.402122800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.402137040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.402223 containerd[1466]: time="2025-09-12T23:56:57.402149720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402163880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402181200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402195960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402208560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402226720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402240640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402322520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402341160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402353840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402366240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402390400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402419760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402433000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406014 containerd[1466]: time="2025-09-12T23:56:57.402444160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 23:56:57.406263 containerd[1466]: time="2025-09-12T23:56:57.402557720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Sep 12 23:56:57.406263 containerd[1466]: time="2025-09-12T23:56:57.402579440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 23:56:57.406263 containerd[1466]: time="2025-09-12T23:56:57.402590440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 23:56:57.406263 containerd[1466]: time="2025-09-12T23:56:57.405653920Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 23:56:57.406263 containerd[1466]: time="2025-09-12T23:56:57.405686280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.406263 containerd[1466]: time="2025-09-12T23:56:57.405702480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 23:56:57.406263 containerd[1466]: time="2025-09-12T23:56:57.405713520Z" level=info msg="NRI interface is disabled by configuration." Sep 12 23:56:57.406263 containerd[1466]: time="2025-09-12T23:56:57.405723920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 23:56:57.407958 containerd[1466]: time="2025-09-12T23:56:57.406091600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 23:56:57.407958 containerd[1466]: time="2025-09-12T23:56:57.406161040Z" level=info msg="Connect containerd service" Sep 12 23:56:57.407958 containerd[1466]: time="2025-09-12T23:56:57.406198240Z" level=info msg="using legacy CRI server" Sep 12 23:56:57.407958 containerd[1466]: time="2025-09-12T23:56:57.406205880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:56:57.407958 containerd[1466]: time="2025-09-12T23:56:57.406455880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 23:56:57.413711 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:56:57.414918 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:56:57.417955 containerd[1466]: time="2025-09-12T23:56:57.417751960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:56:57.419242 containerd[1466]: time="2025-09-12T23:56:57.418768600Z" level=info msg="Start subscribing containerd event" Sep 12 23:56:57.419242 containerd[1466]: time="2025-09-12T23:56:57.418840200Z" level=info msg="Start recovering state" Sep 12 23:56:57.419242 containerd[1466]: time="2025-09-12T23:56:57.418919840Z" level=info msg="Start event monitor" Sep 12 23:56:57.419242 containerd[1466]: time="2025-09-12T23:56:57.418939520Z" level=info msg="Start snapshots syncer" Sep 12 23:56:57.419242 containerd[1466]: time="2025-09-12T23:56:57.418948880Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:56:57.419242 containerd[1466]: time="2025-09-12T23:56:57.418956800Z" level=info msg="Start streaming server" Sep 12 23:56:57.420729 containerd[1466]: time="2025-09-12T23:56:57.419212560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:56:57.420729 containerd[1466]: time="2025-09-12T23:56:57.419943120Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:56:57.420729 containerd[1466]: time="2025-09-12T23:56:57.420007440Z" level=info msg="containerd successfully booted in 0.117219s" Sep 12 23:56:57.429078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:56:57.439273 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:56:57.442057 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:56:57.511283 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:56:57.590827 systemd-networkd[1376]: eth1: Gained IPv6LL Sep 12 23:56:57.591384 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Sep 12 23:56:57.782699 tar[1464]: linux-arm64/README.md Sep 12 23:56:57.803770 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
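The CRI configuration dump above shows the runc runtime carrying Options:map[SystemdCgroup:true] while the deprecated top-level SystemdCgroup field stays false; for the runc v2 shim only the per-runtime option is consulted. The same setting expressed as containerd TOML, as a sketch (written to an .example path so the live config is untouched):

    cat <<'EOF' | sudo tee /etc/containerd/config.toml.example
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    EOF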
Sep 12 23:56:58.295078 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:56:58.320917 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:56:58.329937 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:56:58.341342 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:56:58.341734 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:56:58.352828 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:56:58.365175 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:56:58.375524 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:56:58.380082 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 23:56:58.381140 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:56:58.399584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:56:58.401290 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:56:58.403710 systemd[1]: Startup finished in 802ms (kernel) + 7.281s (initrd) + 4.471s (userspace) = 12.556s. Sep 12 23:56:58.413899 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:56:58.997239 kubelet[1578]: E0912 23:56:58.997118 1578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:56:59.000987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:56:59.001142 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:57:09.252188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:57:09.262020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:09.379628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:09.393017 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:57:09.456468 kubelet[1597]: E0912 23:57:09.456402 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:57:09.460466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:57:09.460778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:57:19.711572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 23:57:19.719938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:19.865895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:57:19.872701 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:57:19.913083 kubelet[1612]: E0912 23:57:19.913008 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:57:19.916010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:57:19.916166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:57:27.854245 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:57:27.867745 systemd[1]: Started sshd@0-128.140.85.90:22-147.75.109.163:35780.service - OpenSSH per-connection server daemon (147.75.109.163:35780). Sep 12 23:57:27.868147 systemd-timesyncd[1355]: Contacted time server 144.76.66.157:123 (2.flatcar.pool.ntp.org). Sep 12 23:57:27.868200 systemd-timesyncd[1355]: Initial clock synchronization to Fri 2025-09-12 23:57:27.656436 UTC. Sep 12 23:57:28.825276 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 35780 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:28.827448 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:28.842391 systemd-logind[1456]: New session 1 of user core. Sep 12 23:57:28.842451 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:57:28.854128 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:57:28.867949 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:57:28.882985 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:57:28.886974 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:57:28.996743 systemd[1624]: Queued start job for default target default.target. Sep 12 23:57:29.006445 systemd[1624]: Created slice app.slice - User Application Slice. Sep 12 23:57:29.006501 systemd[1624]: Reached target paths.target - Paths. Sep 12 23:57:29.006530 systemd[1624]: Reached target timers.target - Timers. Sep 12 23:57:29.010833 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:57:29.022801 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:57:29.022929 systemd[1624]: Reached target sockets.target - Sockets. Sep 12 23:57:29.022942 systemd[1624]: Reached target basic.target - Basic System. Sep 12 23:57:29.022989 systemd[1624]: Reached target default.target - Main User Target. Sep 12 23:57:29.023018 systemd[1624]: Startup finished in 128ms. Sep 12 23:57:29.023722 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:57:29.034982 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:57:29.716233 systemd[1]: Started sshd@1-128.140.85.90:22-147.75.109.163:35782.service - OpenSSH per-connection server daemon (147.75.109.163:35782). Sep 12 23:57:30.166818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 23:57:30.177959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:30.302017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:57:30.307497 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:57:30.354432 kubelet[1645]: E0912 23:57:30.354371 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:57:30.357467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:57:30.357832 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:57:30.669847 sshd[1635]: Accepted publickey for core from 147.75.109.163 port 35782 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:30.672842 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:30.679841 systemd-logind[1456]: New session 2 of user core. Sep 12 23:57:30.685557 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:57:31.335076 sshd[1635]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:31.338687 systemd[1]: sshd@1-128.140.85.90:22-147.75.109.163:35782.service: Deactivated successfully. Sep 12 23:57:31.340484 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:57:31.344096 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:57:31.345433 systemd-logind[1456]: Removed session 2. Sep 12 23:57:31.508158 systemd[1]: Started sshd@2-128.140.85.90:22-147.75.109.163:48958.service - OpenSSH per-connection server daemon (147.75.109.163:48958). Sep 12 23:57:32.473145 sshd[1657]: Accepted publickey for core from 147.75.109.163 port 48958 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:32.475795 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:32.482401 systemd-logind[1456]: New session 3 of user core. Sep 12 23:57:32.493142 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:57:33.141029 sshd[1657]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:33.147021 systemd[1]: sshd@2-128.140.85.90:22-147.75.109.163:48958.service: Deactivated successfully. Sep 12 23:57:33.149733 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:57:33.151137 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Sep 12 23:57:33.152740 systemd-logind[1456]: Removed session 3. Sep 12 23:57:33.316063 systemd[1]: Started sshd@3-128.140.85.90:22-147.75.109.163:48966.service - OpenSSH per-connection server daemon (147.75.109.163:48966). Sep 12 23:57:34.289547 sshd[1664]: Accepted publickey for core from 147.75.109.163 port 48966 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:34.291185 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:34.297119 systemd-logind[1456]: New session 4 of user core. Sep 12 23:57:34.310918 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:57:34.967981 sshd[1664]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:34.973390 systemd[1]: sshd@3-128.140.85.90:22-147.75.109.163:48966.service: Deactivated successfully. Sep 12 23:57:34.975553 systemd[1]: session-4.scope: Deactivated successfully. 
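Every kubelet start in this stretch exits with status 1 for the same reason: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm during init/join, so the failures are expected until the node is bootstrapped, and systemd simply keeps scheduling restarts in the meantime. The smallest file that gets past this particular error, as a hedged sketch (field values are assumptions, not taken from this host):

    sudo mkdir -p /var/lib/kubelet
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF
    sudo systemctl restart kubelet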
Sep 12 23:57:34.976448 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:57:34.978951 systemd-logind[1456]: Removed session 4. Sep 12 23:57:35.143063 systemd[1]: Started sshd@4-128.140.85.90:22-147.75.109.163:48970.service - OpenSSH per-connection server daemon (147.75.109.163:48970). Sep 12 23:57:36.132558 sshd[1671]: Accepted publickey for core from 147.75.109.163 port 48970 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:36.136287 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:36.143557 systemd-logind[1456]: New session 5 of user core. Sep 12 23:57:36.153917 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:57:36.670264 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:57:36.670586 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:57:36.691183 sudo[1674]: pam_unix(sudo:session): session closed for user root Sep 12 23:57:36.856002 sshd[1671]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:36.863050 systemd[1]: sshd@4-128.140.85.90:22-147.75.109.163:48970.service: Deactivated successfully. Sep 12 23:57:36.866234 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:57:36.868729 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:57:36.870223 systemd-logind[1456]: Removed session 5. Sep 12 23:57:37.036918 systemd[1]: Started sshd@5-128.140.85.90:22-147.75.109.163:48980.service - OpenSSH per-connection server daemon (147.75.109.163:48980). Sep 12 23:57:38.027449 sshd[1679]: Accepted publickey for core from 147.75.109.163 port 48980 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:38.029723 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:38.036691 systemd-logind[1456]: New session 6 of user core. Sep 12 23:57:38.040890 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:57:38.551374 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:57:38.551737 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:57:38.556332 sudo[1683]: pam_unix(sudo:session): session closed for user root Sep 12 23:57:38.562288 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 23:57:38.562806 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:57:38.588426 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 23:57:38.590842 auditctl[1686]: No rules Sep 12 23:57:38.591479 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:57:38.591843 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 23:57:38.599370 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:57:38.628019 augenrules[1704]: No rules Sep 12 23:57:38.629734 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:57:38.631447 sudo[1682]: pam_unix(sudo:session): session closed for user root Sep 12 23:57:38.792051 sshd[1679]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:38.797762 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. 
Sep 12 23:57:38.798146 systemd[1]: sshd@5-128.140.85.90:22-147.75.109.163:48980.service: Deactivated successfully. Sep 12 23:57:38.800906 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:57:38.803955 systemd-logind[1456]: Removed session 6. Sep 12 23:57:38.967353 systemd[1]: Started sshd@6-128.140.85.90:22-147.75.109.163:48988.service - OpenSSH per-connection server daemon (147.75.109.163:48988). Sep 12 23:57:39.930173 sshd[1712]: Accepted publickey for core from 147.75.109.163 port 48988 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:39.932125 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:39.937070 systemd-logind[1456]: New session 7 of user core. Sep 12 23:57:39.944498 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 23:57:40.445217 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:57:40.445968 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:57:40.447020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 23:57:40.453859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:40.621837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:40.632256 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:57:40.678296 kubelet[1733]: E0912 23:57:40.677644 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:57:40.681682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:57:40.681834 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:57:40.809325 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 23:57:40.810324 (dockerd)[1745]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:57:41.068648 dockerd[1745]: time="2025-09-12T23:57:41.068445843Z" level=info msg="Starting up" Sep 12 23:57:41.165657 dockerd[1745]: time="2025-09-12T23:57:41.165312677Z" level=info msg="Loading containers: start." Sep 12 23:57:41.276842 kernel: Initializing XFRM netlink socket Sep 12 23:57:41.364735 systemd-networkd[1376]: docker0: Link UP Sep 12 23:57:41.382905 dockerd[1745]: time="2025-09-12T23:57:41.382831884Z" level=info msg="Loading containers: done." 
Sep 12 23:57:41.399166 dockerd[1745]: time="2025-09-12T23:57:41.399072394Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:57:41.399333 dockerd[1745]: time="2025-09-12T23:57:41.399192312Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 23:57:41.399333 dockerd[1745]: time="2025-09-12T23:57:41.399305343Z" level=info msg="Daemon has completed initialization" Sep 12 23:57:41.442746 dockerd[1745]: time="2025-09-12T23:57:41.441445052Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:57:41.441766 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:57:41.849307 update_engine[1457]: I20250912 23:57:41.848637 1457 update_attempter.cc:509] Updating boot flags... Sep 12 23:57:41.904676 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1891) Sep 12 23:57:42.484091 containerd[1466]: time="2025-09-12T23:57:42.483649960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 23:57:43.199084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894745356.mount: Deactivated successfully. Sep 12 23:57:44.496481 containerd[1466]: time="2025-09-12T23:57:44.496427456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:44.499451 containerd[1466]: time="2025-09-12T23:57:44.499391373Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390326" Sep 12 23:57:44.501056 containerd[1466]: time="2025-09-12T23:57:44.500991361Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:44.505656 containerd[1466]: time="2025-09-12T23:57:44.504839388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:44.507658 containerd[1466]: time="2025-09-12T23:57:44.506692185Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.022995364s" Sep 12 23:57:44.507658 containerd[1466]: time="2025-09-12T23:57:44.506750522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 12 23:57:44.509434 containerd[1466]: time="2025-09-12T23:57:44.509395597Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 23:57:46.321761 containerd[1466]: time="2025-09-12T23:57:46.321701744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:46.323299 containerd[1466]: time="2025-09-12T23:57:46.323257336Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547937" Sep 12 23:57:46.325638 containerd[1466]: time="2025-09-12T23:57:46.324004705Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:46.327385 containerd[1466]: time="2025-09-12T23:57:46.327338796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:46.328838 containerd[1466]: time="2025-09-12T23:57:46.328799934Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.81915499s" Sep 12 23:57:46.328955 containerd[1466]: time="2025-09-12T23:57:46.328938562Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 12 23:57:46.330015 containerd[1466]: time="2025-09-12T23:57:46.329973721Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 23:57:48.080848 containerd[1466]: time="2025-09-12T23:57:48.080782995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:48.082410 containerd[1466]: time="2025-09-12T23:57:48.082359424Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295997" Sep 12 23:57:48.084090 containerd[1466]: time="2025-09-12T23:57:48.083529479Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:48.087059 containerd[1466]: time="2025-09-12T23:57:48.086999037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:48.088433 containerd[1466]: time="2025-09-12T23:57:48.088387012Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.75837789s" Sep 12 23:57:48.088433 containerd[1466]: time="2025-09-12T23:57:48.088430093Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 12 23:57:48.089464 containerd[1466]: time="2025-09-12T23:57:48.089438404Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 23:57:49.198318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531201600.mount: Deactivated successfully. 
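
[Editor's note: containerd logs both "bytes read" and elapsed time for each pull, so an effective transfer rate falls out directly; for example 27390326 bytes in 2.022995364s is roughly 12.9 MiB/s. A rough Go sketch of that arithmetic, assuming the "bytes read" figure covers the whole transfer (it ignores layer caching and registry round-trips):]

```go
// Effective pull rate from the containerd figures logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	pulls := []struct {
		image string
		bytes float64
		took  string
	}{
		{"kube-apiserver:v1.33.5", 27390326, "2.022995364s"},
		{"kube-controller-manager:v1.33.5", 23547937, "1.81915499s"},
		{"kube-scheduler:v1.33.5", 18295997, "1.75837789s"},
	}
	for _, p := range pulls {
		d, err := time.ParseDuration(p.took)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%-35s %6.1f MiB/s\n", p.image, p.bytes/d.Seconds()/(1<<20))
	}
}
```
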
Sep 12 23:57:49.517193 containerd[1466]: time="2025-09-12T23:57:49.517103615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:49.518502 containerd[1466]: time="2025-09-12T23:57:49.518435719Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240132" Sep 12 23:57:49.519808 containerd[1466]: time="2025-09-12T23:57:49.519764747Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:49.521908 containerd[1466]: time="2025-09-12T23:57:49.521839978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:49.523161 containerd[1466]: time="2025-09-12T23:57:49.522770605Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.433063691s" Sep 12 23:57:49.523161 containerd[1466]: time="2025-09-12T23:57:49.522808344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 12 23:57:49.523469 containerd[1466]: time="2025-09-12T23:57:49.523445522Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 23:57:50.159412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2626621960.mount: Deactivated successfully. Sep 12 23:57:50.716239 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 12 23:57:50.725739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:50.874353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:50.891216 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:57:50.936112 kubelet[2027]: E0912 23:57:50.936039 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:57:50.939343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:57:50.939504 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 23:57:51.438777 containerd[1466]: time="2025-09-12T23:57:51.438711846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:51.440569 containerd[1466]: time="2025-09-12T23:57:51.440521105Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Sep 12 23:57:51.441553 containerd[1466]: time="2025-09-12T23:57:51.441043503Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:51.445076 containerd[1466]: time="2025-09-12T23:57:51.445032087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:51.446652 containerd[1466]: time="2025-09-12T23:57:51.446572356Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.922741766s" Sep 12 23:57:51.446652 containerd[1466]: time="2025-09-12T23:57:51.446652378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 12 23:57:51.448210 containerd[1466]: time="2025-09-12T23:57:51.448185895Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 23:57:51.979956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount338894728.mount: Deactivated successfully. 
Sep 12 23:57:51.989481 containerd[1466]: time="2025-09-12T23:57:51.988364764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:51.990637 containerd[1466]: time="2025-09-12T23:57:51.990570337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 12 23:57:51.992532 containerd[1466]: time="2025-09-12T23:57:51.992451467Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:51.998569 containerd[1466]: time="2025-09-12T23:57:51.998134051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:51.999182 containerd[1466]: time="2025-09-12T23:57:51.999142652Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 550.814213ms" Sep 12 23:57:51.999182 containerd[1466]: time="2025-09-12T23:57:51.999178409Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 23:57:52.000029 containerd[1466]: time="2025-09-12T23:57:51.999996364Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 23:57:52.551708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount537124760.mount: Deactivated successfully. Sep 12 23:57:55.076639 containerd[1466]: time="2025-09-12T23:57:55.075021699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:55.077210 containerd[1466]: time="2025-09-12T23:57:55.076762886Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465913" Sep 12 23:57:55.077698 containerd[1466]: time="2025-09-12T23:57:55.077667995Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:55.081472 containerd[1466]: time="2025-09-12T23:57:55.081422375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:55.083058 containerd[1466]: time="2025-09-12T23:57:55.083014430Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.082980425s" Sep 12 23:57:55.083209 containerd[1466]: time="2025-09-12T23:57:55.083187905Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 12 23:58:00.965626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
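
[Editor's note: the "Scheduled restart job" counter climbs by one roughly every ten seconds (23:57:40.447, 23:57:50.716, 23:58:00.965), consistent with a RestartSec=10s style setting on kubelet.service; that setting is an inference, not something this log states. A quick Go check of the spacing:]

```go
// Diff the three restart timestamps pulled from the entries above.
package main

import (
	"fmt"
	"time"
)

func main() {
	stamps := []string{"23:57:40.447020", "23:57:50.716239", "23:58:00.965626"}
	const layout = "15:04:05.000000"
	var prev time.Time
	for i, s := range stamps {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		if i > 0 {
			fmt.Printf("restart %d -> %d: %v\n", i+3, i+4, t.Sub(prev))
		}
		prev = t
	}
}
```
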
Sep 12 23:58:00.978512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:58:01.113789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:58:01.125251 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:58:01.180090 kubelet[2126]: E0912 23:58:01.180021 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:58:01.183896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:58:01.184042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:58:02.397473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:58:02.405180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:58:02.452415 systemd[1]: Reloading requested from client PID 2140 ('systemctl') (unit session-7.scope)... Sep 12 23:58:02.452438 systemd[1]: Reloading... Sep 12 23:58:02.570503 zram_generator::config[2177]: No configuration found. Sep 12 23:58:02.705099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:58:02.779004 systemd[1]: Reloading finished in 326 ms. Sep 12 23:58:02.843679 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:58:02.843824 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:58:02.844216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:58:02.857300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:58:02.992120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:58:02.998088 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:58:03.048657 kubelet[2229]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:58:03.048657 kubelet[2229]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 23:58:03.048657 kubelet[2229]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 23:58:03.048657 kubelet[2229]: I0912 23:58:03.048155 2229 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:58:05.065080 kubelet[2229]: I0912 23:58:05.065004 2229 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 23:58:05.065080 kubelet[2229]: I0912 23:58:05.065046 2229 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:58:05.065883 kubelet[2229]: I0912 23:58:05.065336 2229 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 23:58:05.097167 kubelet[2229]: E0912 23:58:05.097091 2229 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://128.140.85.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 128.140.85.90:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 23:58:05.097319 kubelet[2229]: I0912 23:58:05.097288 2229 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:58:05.114646 kubelet[2229]: E0912 23:58:05.113644 2229 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:58:05.114646 kubelet[2229]: I0912 23:58:05.113706 2229 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:58:05.116107 kubelet[2229]: I0912 23:58:05.116083 2229 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:58:05.117186 kubelet[2229]: I0912 23:58:05.117111 2229 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:58:05.117358 kubelet[2229]: I0912 23:58:05.117173 2229 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-f526684106","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:58:05.117453 kubelet[2229]: I0912 23:58:05.117411 2229 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:58:05.117453 kubelet[2229]: I0912 23:58:05.117420 2229 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 23:58:05.117667 kubelet[2229]: I0912 23:58:05.117635 2229 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:58:05.121983 kubelet[2229]: I0912 23:58:05.121927 2229 kubelet.go:480] "Attempting to sync node with API server" Sep 12 23:58:05.121983 kubelet[2229]: I0912 23:58:05.121965 2229 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:58:05.121983 kubelet[2229]: I0912 23:58:05.121991 2229 kubelet.go:386] "Adding apiserver pod source" Sep 12 23:58:05.123929 kubelet[2229]: I0912 23:58:05.122006 2229 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:58:05.128525 kubelet[2229]: E0912 23:58:05.128472 2229 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://128.140.85.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 128.140.85.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 23:58:05.129144 kubelet[2229]: E0912 23:58:05.129103 2229 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://128.140.85.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-f526684106&limit=500&resourceVersion=0\": dial tcp 128.140.85.90:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 23:58:05.129265 kubelet[2229]: I0912 23:58:05.129244 2229 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:58:05.130115 kubelet[2229]: I0912 23:58:05.130085 2229 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 23:58:05.130236 kubelet[2229]: W0912 23:58:05.130219 2229 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 23:58:05.135520 kubelet[2229]: I0912 23:58:05.135436 2229 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:58:05.135520 kubelet[2229]: I0912 23:58:05.135482 2229 server.go:1289] "Started kubelet" Sep 12 23:58:05.137271 kubelet[2229]: I0912 23:58:05.137237 2229 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:58:05.138319 kubelet[2229]: I0912 23:58:05.138294 2229 server.go:317] "Adding debug handlers to kubelet server" Sep 12 23:58:05.141176 kubelet[2229]: I0912 23:58:05.140270 2229 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:58:05.141176 kubelet[2229]: I0912 23:58:05.140653 2229 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:58:05.142230 kubelet[2229]: E0912 23:58:05.140819 2229 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://128.140.85.90:6443/api/v1/namespaces/default/events\": dial tcp 128.140.85.90:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-f526684106.1864ae60bf92318d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-f526684106,UID:ci-4081-3-5-n-f526684106,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-f526684106,},FirstTimestamp:2025-09-12 23:58:05.135458701 +0000 UTC m=+2.131300432,LastTimestamp:2025-09-12 23:58:05.135458701 +0000 UTC m=+2.131300432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-f526684106,}" Sep 12 23:58:05.145914 kubelet[2229]: I0912 23:58:05.144465 2229 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:58:05.145914 kubelet[2229]: I0912 23:58:05.144855 2229 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:58:05.148992 kubelet[2229]: E0912 23:58:05.148968 2229 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:58:05.149411 kubelet[2229]: E0912 23:58:05.149391 2229 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-f526684106\" not found" Sep 12 23:58:05.149547 kubelet[2229]: I0912 23:58:05.149535 2229 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 23:58:05.149878 kubelet[2229]: I0912 23:58:05.149856 2229 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:58:05.150015 kubelet[2229]: I0912 23:58:05.150005 2229 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:58:05.150631 kubelet[2229]: E0912 23:58:05.150588 2229 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://128.140.85.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.85.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 23:58:05.151342 kubelet[2229]: E0912 23:58:05.151307 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.85.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-f526684106?timeout=10s\": dial tcp 128.140.85.90:6443: connect: connection refused" interval="200ms" Sep 12 23:58:05.151731 kubelet[2229]: I0912 23:58:05.151704 2229 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:58:05.153304 kubelet[2229]: I0912 23:58:05.153284 2229 factory.go:223] Registration of the containerd container factory successfully Sep 12 23:58:05.153387 kubelet[2229]: I0912 23:58:05.153377 2229 factory.go:223] Registration of the systemd container factory successfully Sep 12 23:58:05.170680 kubelet[2229]: I0912 23:58:05.170136 2229 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 23:58:05.176346 kubelet[2229]: I0912 23:58:05.176303 2229 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 23:58:05.176346 kubelet[2229]: I0912 23:58:05.176335 2229 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 23:58:05.176493 kubelet[2229]: I0912 23:58:05.176357 2229 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
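
[Editor's note: the failed crio container factory registration above is benign on a containerd node; the probe hits /var/run/crio/crio.sock and the socket simply does not exist. A two-line Go reproduction that yields roughly the same dial error:]

```go
// Dialing a unix socket that isn't there reproduces the logged failure mode.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, err := net.Dial("unix", "/var/run/crio/crio.sock")
	fmt.Println(err) // dial unix /var/run/crio/crio.sock: connect: no such file or directory
}
```
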
Sep 12 23:58:05.176493 kubelet[2229]: I0912 23:58:05.176363 2229 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 23:58:05.176493 kubelet[2229]: E0912 23:58:05.176397 2229 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:58:05.178948 kubelet[2229]: E0912 23:58:05.178912 2229 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://128.140.85.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 128.140.85.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 23:58:05.179185 kubelet[2229]: I0912 23:58:05.179169 2229 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:58:05.179267 kubelet[2229]: I0912 23:58:05.179256 2229 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:58:05.179338 kubelet[2229]: I0912 23:58:05.179327 2229 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:58:05.181897 kubelet[2229]: I0912 23:58:05.181860 2229 policy_none.go:49] "None policy: Start" Sep 12 23:58:05.182108 kubelet[2229]: I0912 23:58:05.182089 2229 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:58:05.182230 kubelet[2229]: I0912 23:58:05.182215 2229 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:58:05.190023 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 23:58:05.207289 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 23:58:05.212682 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 23:58:05.224498 kubelet[2229]: E0912 23:58:05.224172 2229 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 23:58:05.224705 kubelet[2229]: I0912 23:58:05.224514 2229 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:58:05.224705 kubelet[2229]: I0912 23:58:05.224536 2229 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:58:05.226425 kubelet[2229]: I0912 23:58:05.225123 2229 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:58:05.227467 kubelet[2229]: E0912 23:58:05.227290 2229 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 23:58:05.227467 kubelet[2229]: E0912 23:58:05.227354 2229 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-f526684106\" not found" Sep 12 23:58:05.292561 systemd[1]: Created slice kubepods-burstable-podd51acb427c8724bde6024fb2c915f2fd.slice - libcontainer container kubepods-burstable-podd51acb427c8724bde6024fb2c915f2fd.slice. Sep 12 23:58:05.314296 kubelet[2229]: E0912 23:58:05.313928 2229 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-f526684106\" not found" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:05.319528 systemd[1]: Created slice kubepods-burstable-pod2f14af745af266b1493306e3660df8b2.slice - libcontainer container kubepods-burstable-pod2f14af745af266b1493306e3660df8b2.slice. 
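
[Editor's note: every "connection refused" against 128.140.85.90:6443 in this stretch is the same chicken-and-egg moment: this kubelet is about to launch kube-apiserver as a static pod, so nothing listens on 6443 yet, and the lease-controller retries back off 200ms, then 400ms, 800ms and 1.6s further down. A hypothetical Go sketch of the equivalent wait-for-port loop; illustrative only, not kubelet code:]

```go
// Retry a TCP dial with doubling backoff until the apiserver port opens.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForAPIServer(addr string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	backoff := 200 * time.Millisecond // mirrors the 200ms starting interval logged above
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(backoff)
		if backoff < 2*time.Second {
			backoff *= 2
		}
	}
	return fmt.Errorf("%s not reachable within %v", addr, maxWait)
}

func main() {
	if err := waitForAPIServer("128.140.85.90:6443", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver port open")
}
```
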
Sep 12 23:58:05.328943 kubelet[2229]: I0912 23:58:05.328908 2229 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:05.330252 kubelet[2229]: E0912 23:58:05.330217 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://128.140.85.90:6443/api/v1/nodes\": dial tcp 128.140.85.90:6443: connect: connection refused" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:05.330748 kubelet[2229]: E0912 23:58:05.330547 2229 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-f526684106\" not found" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:05.332576 systemd[1]: Created slice kubepods-burstable-podb98d27e81c49f0100411d75ed71e64a9.slice - libcontainer container kubepods-burstable-podb98d27e81c49f0100411d75ed71e64a9.slice. Sep 12 23:58:05.335096 kubelet[2229]: E0912 23:58:05.335058 2229 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-f526684106\" not found" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:05.351000 kubelet[2229]: I0912 23:58:05.350917 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b98d27e81c49f0100411d75ed71e64a9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-f526684106\" (UID: \"b98d27e81c49f0100411d75ed71e64a9\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-f526684106" Sep 12 23:58:05.351204 kubelet[2229]: I0912 23:58:05.351124 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d51acb427c8724bde6024fb2c915f2fd-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-f526684106\" (UID: \"d51acb427c8724bde6024fb2c915f2fd\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:05.351204 kubelet[2229]: I0912 23:58:05.351185 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f14af745af266b1493306e3660df8b2-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-f526684106\" (UID: \"2f14af745af266b1493306e3660df8b2\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:05.351330 kubelet[2229]: I0912 23:58:05.351228 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f14af745af266b1493306e3660df8b2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-f526684106\" (UID: \"2f14af745af266b1493306e3660df8b2\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:05.351330 kubelet[2229]: I0912 23:58:05.351268 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f14af745af266b1493306e3660df8b2-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-f526684106\" (UID: \"2f14af745af266b1493306e3660df8b2\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:05.351330 kubelet[2229]: I0912 23:58:05.351305 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f14af745af266b1493306e3660df8b2-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081-3-5-n-f526684106\" (UID: \"2f14af745af266b1493306e3660df8b2\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:05.351487 kubelet[2229]: I0912 23:58:05.351341 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d51acb427c8724bde6024fb2c915f2fd-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-f526684106\" (UID: \"d51acb427c8724bde6024fb2c915f2fd\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:05.351487 kubelet[2229]: I0912 23:58:05.351375 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d51acb427c8724bde6024fb2c915f2fd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-f526684106\" (UID: \"d51acb427c8724bde6024fb2c915f2fd\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:05.351487 kubelet[2229]: I0912 23:58:05.351408 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f14af745af266b1493306e3660df8b2-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-f526684106\" (UID: \"2f14af745af266b1493306e3660df8b2\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:05.352153 kubelet[2229]: E0912 23:58:05.352084 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.85.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-f526684106?timeout=10s\": dial tcp 128.140.85.90:6443: connect: connection refused" interval="400ms" Sep 12 23:58:05.533849 kubelet[2229]: I0912 23:58:05.533788 2229 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:05.534174 kubelet[2229]: E0912 23:58:05.534148 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://128.140.85.90:6443/api/v1/nodes\": dial tcp 128.140.85.90:6443: connect: connection refused" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:05.616161 containerd[1466]: time="2025-09-12T23:58:05.615961187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-f526684106,Uid:d51acb427c8724bde6024fb2c915f2fd,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:05.632999 containerd[1466]: time="2025-09-12T23:58:05.632919861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-f526684106,Uid:2f14af745af266b1493306e3660df8b2,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:05.636387 containerd[1466]: time="2025-09-12T23:58:05.636291029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-f526684106,Uid:b98d27e81c49f0100411d75ed71e64a9,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:05.753044 kubelet[2229]: E0912 23:58:05.752941 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.85.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-f526684106?timeout=10s\": dial tcp 128.140.85.90:6443: connect: connection refused" interval="800ms" Sep 12 23:58:05.938286 kubelet[2229]: I0912 23:58:05.937712 2229 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:05.938286 kubelet[2229]: E0912 23:58:05.938108 2229 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://128.140.85.90:6443/api/v1/nodes\": dial tcp 128.140.85.90:6443: connect: connection refused" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:06.127650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1415683879.mount: Deactivated successfully. Sep 12 23:58:06.136008 containerd[1466]: time="2025-09-12T23:58:06.135885716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:58:06.137506 containerd[1466]: time="2025-09-12T23:58:06.137449467Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:58:06.139486 containerd[1466]: time="2025-09-12T23:58:06.139381406Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 12 23:58:06.139486 containerd[1466]: time="2025-09-12T23:58:06.139459683Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:58:06.140937 containerd[1466]: time="2025-09-12T23:58:06.140852039Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:58:06.143199 containerd[1466]: time="2025-09-12T23:58:06.142526506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:58:06.143199 containerd[1466]: time="2025-09-12T23:58:06.143133607Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:58:06.148222 containerd[1466]: time="2025-09-12T23:58:06.147873938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:58:06.148953 containerd[1466]: time="2025-09-12T23:58:06.148913065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 512.529639ms" Sep 12 23:58:06.150715 containerd[1466]: time="2025-09-12T23:58:06.150663009Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.640272ms" Sep 12 23:58:06.151657 containerd[1466]: time="2025-09-12T23:58:06.151531022Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 
535.408521ms" Sep 12 23:58:06.273730 containerd[1466]: time="2025-09-12T23:58:06.273531288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:06.274698 containerd[1466]: time="2025-09-12T23:58:06.274402740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:06.274698 containerd[1466]: time="2025-09-12T23:58:06.274432579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:06.274698 containerd[1466]: time="2025-09-12T23:58:06.274594174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:06.281890 kubelet[2229]: E0912 23:58:06.281786 2229 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://128.140.85.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 128.140.85.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 23:58:06.282388 containerd[1466]: time="2025-09-12T23:58:06.281944462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:06.282388 containerd[1466]: time="2025-09-12T23:58:06.282006980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:06.282388 containerd[1466]: time="2025-09-12T23:58:06.282023460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:06.282388 containerd[1466]: time="2025-09-12T23:58:06.282103857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:06.284719 containerd[1466]: time="2025-09-12T23:58:06.284275948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:06.284719 containerd[1466]: time="2025-09-12T23:58:06.284441183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:06.284719 containerd[1466]: time="2025-09-12T23:58:06.284454303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:06.285157 containerd[1466]: time="2025-09-12T23:58:06.284955407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:06.302822 systemd[1]: Started cri-containerd-49b890a02a54f1897d123cafab807bd37319190950db46a28b635ef1f44d43e4.scope - libcontainer container 49b890a02a54f1897d123cafab807bd37319190950db46a28b635ef1f44d43e4. Sep 12 23:58:06.324811 systemd[1]: Started cri-containerd-66ee0e1433f16d6e151da0dab05189b236096352b471a0919b1f8579bb0930db.scope - libcontainer container 66ee0e1433f16d6e151da0dab05189b236096352b471a0919b1f8579bb0930db. 
Sep 12 23:58:06.329081 systemd[1]: Started cri-containerd-94518b888dc280d8376e4119590a2abd26e041cfab925ca0c6b36d195f4aedfe.scope - libcontainer container 94518b888dc280d8376e4119590a2abd26e041cfab925ca0c6b36d195f4aedfe. Sep 12 23:58:06.389812 containerd[1466]: time="2025-09-12T23:58:06.389478385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-f526684106,Uid:d51acb427c8724bde6024fb2c915f2fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"66ee0e1433f16d6e151da0dab05189b236096352b471a0919b1f8579bb0930db\"" Sep 12 23:58:06.392016 containerd[1466]: time="2025-09-12T23:58:06.391963466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-f526684106,Uid:b98d27e81c49f0100411d75ed71e64a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"49b890a02a54f1897d123cafab807bd37319190950db46a28b635ef1f44d43e4\"" Sep 12 23:58:06.402187 containerd[1466]: time="2025-09-12T23:58:06.401882753Z" level=info msg="CreateContainer within sandbox \"66ee0e1433f16d6e151da0dab05189b236096352b471a0919b1f8579bb0930db\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:58:06.402908 containerd[1466]: time="2025-09-12T23:58:06.402867122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-f526684106,Uid:2f14af745af266b1493306e3660df8b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"94518b888dc280d8376e4119590a2abd26e041cfab925ca0c6b36d195f4aedfe\"" Sep 12 23:58:06.404027 containerd[1466]: time="2025-09-12T23:58:06.403789813Z" level=info msg="CreateContainer within sandbox \"49b890a02a54f1897d123cafab807bd37319190950db46a28b635ef1f44d43e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:58:06.408166 containerd[1466]: time="2025-09-12T23:58:06.408120996Z" level=info msg="CreateContainer within sandbox \"94518b888dc280d8376e4119590a2abd26e041cfab925ca0c6b36d195f4aedfe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:58:06.425794 containerd[1466]: time="2025-09-12T23:58:06.425465368Z" level=info msg="CreateContainer within sandbox \"49b890a02a54f1897d123cafab807bd37319190950db46a28b635ef1f44d43e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad\"" Sep 12 23:58:06.426063 containerd[1466]: time="2025-09-12T23:58:06.425965792Z" level=info msg="CreateContainer within sandbox \"66ee0e1433f16d6e151da0dab05189b236096352b471a0919b1f8579bb0930db\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c22aa0c20f429399a100858106e071cf9037333ce36269b63bec6cbefda4b9b6\"" Sep 12 23:58:06.427798 containerd[1466]: time="2025-09-12T23:58:06.426890323Z" level=info msg="StartContainer for \"c22aa0c20f429399a100858106e071cf9037333ce36269b63bec6cbefda4b9b6\"" Sep 12 23:58:06.427798 containerd[1466]: time="2025-09-12T23:58:06.426934202Z" level=info msg="StartContainer for \"dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad\"" Sep 12 23:58:06.433628 containerd[1466]: time="2025-09-12T23:58:06.433534553Z" level=info msg="CreateContainer within sandbox \"94518b888dc280d8376e4119590a2abd26e041cfab925ca0c6b36d195f4aedfe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3\"" Sep 12 23:58:06.434189 containerd[1466]: time="2025-09-12T23:58:06.434165293Z" level=info msg="StartContainer for 
\"1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3\"" Sep 12 23:58:06.470569 systemd[1]: Started cri-containerd-c22aa0c20f429399a100858106e071cf9037333ce36269b63bec6cbefda4b9b6.scope - libcontainer container c22aa0c20f429399a100858106e071cf9037333ce36269b63bec6cbefda4b9b6. Sep 12 23:58:06.482977 systemd[1]: Started cri-containerd-1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3.scope - libcontainer container 1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3. Sep 12 23:58:06.487531 systemd[1]: Started cri-containerd-dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad.scope - libcontainer container dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad. Sep 12 23:58:06.546751 containerd[1466]: time="2025-09-12T23:58:06.544790798Z" level=info msg="StartContainer for \"c22aa0c20f429399a100858106e071cf9037333ce36269b63bec6cbefda4b9b6\" returns successfully" Sep 12 23:58:06.555519 kubelet[2229]: E0912 23:58:06.554373 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.85.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-f526684106?timeout=10s\": dial tcp 128.140.85.90:6443: connect: connection refused" interval="1.6s" Sep 12 23:58:06.569653 containerd[1466]: time="2025-09-12T23:58:06.568842599Z" level=info msg="StartContainer for \"1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3\" returns successfully" Sep 12 23:58:06.601262 containerd[1466]: time="2025-09-12T23:58:06.601121819Z" level=info msg="StartContainer for \"dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad\" returns successfully" Sep 12 23:58:06.604492 kubelet[2229]: E0912 23:58:06.604433 2229 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://128.140.85.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.85.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 23:58:06.742614 kubelet[2229]: I0912 23:58:06.740908 2229 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:07.199107 kubelet[2229]: E0912 23:58:07.198955 2229 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-f526684106\" not found" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:07.202205 kubelet[2229]: E0912 23:58:07.202097 2229 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-f526684106\" not found" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:07.205695 kubelet[2229]: E0912 23:58:07.205441 2229 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-f526684106\" not found" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:08.207715 kubelet[2229]: E0912 23:58:08.206566 2229 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-f526684106\" not found" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:08.207715 kubelet[2229]: E0912 23:58:08.207016 2229 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-f526684106\" not found" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:09.569396 kubelet[2229]: E0912 23:58:09.569356 2229 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-f526684106\" not found" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:10.085488 kubelet[2229]: E0912 23:58:10.085436 2229 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-f526684106\" not found" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:10.105308 kubelet[2229]: I0912 23:58:10.105029 2229 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:10.125569 kubelet[2229]: I0912 23:58:10.125528 2229 apiserver.go:52] "Watching apiserver" Sep 12 23:58:10.150885 kubelet[2229]: I0912 23:58:10.150809 2229 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:58:10.152563 kubelet[2229]: I0912 23:58:10.151971 2229 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:10.174527 kubelet[2229]: E0912 23:58:10.174401 2229 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-5-n-f526684106.1864ae60bf92318d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-f526684106,UID:ci-4081-3-5-n-f526684106,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-f526684106,},FirstTimestamp:2025-09-12 23:58:05.135458701 +0000 UTC m=+2.131300432,LastTimestamp:2025-09-12 23:58:05.135458701 +0000 UTC m=+2.131300432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-f526684106,}" Sep 12 23:58:10.236058 kubelet[2229]: E0912 23:58:10.236020 2229 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-f526684106\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:10.236439 kubelet[2229]: I0912 23:58:10.236250 2229 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:10.241478 kubelet[2229]: E0912 23:58:10.241171 2229 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-n-f526684106\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:10.241478 kubelet[2229]: I0912 23:58:10.241206 2229 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-f526684106" Sep 12 23:58:10.265615 kubelet[2229]: E0912 23:58:10.263727 2229 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-5-n-f526684106.1864ae60c0600293 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-f526684106,UID:ci-4081-3-5-n-f526684106,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-f526684106,},FirstTimestamp:2025-09-12 23:58:05.148947091 +0000 UTC m=+2.144788822,LastTimestamp:2025-09-12 23:58:05.148947091 +0000 UTC 
m=+2.144788822,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-f526684106,}" Sep 12 23:58:10.265615 kubelet[2229]: E0912 23:58:10.264018 2229 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-f526684106\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-n-f526684106" Sep 12 23:58:11.617471 kubelet[2229]: I0912 23:58:11.617399 2229 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-f526684106" Sep 12 23:58:12.467743 systemd[1]: Reloading requested from client PID 2519 ('systemctl') (unit session-7.scope)... Sep 12 23:58:12.467760 systemd[1]: Reloading... Sep 12 23:58:12.589644 zram_generator::config[2574]: No configuration found. Sep 12 23:58:12.682383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:58:12.772499 systemd[1]: Reloading finished in 304 ms. Sep 12 23:58:12.813369 kubelet[2229]: I0912 23:58:12.813263 2229 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:58:12.813628 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:58:12.826666 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:58:12.828705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:58:12.828816 systemd[1]: kubelet.service: Consumed 2.596s CPU time, 127.2M memory peak, 0B memory swap peak. Sep 12 23:58:12.836970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:58:12.991923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:58:13.002301 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:58:13.063756 kubelet[2604]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:58:13.063756 kubelet[2604]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 23:58:13.063756 kubelet[2604]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 23:58:13.064382 kubelet[2604]: I0912 23:58:13.063737 2604 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:58:13.077233 kubelet[2604]: I0912 23:58:13.077179 2604 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 23:58:13.077233 kubelet[2604]: I0912 23:58:13.077216 2604 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:58:13.077524 kubelet[2604]: I0912 23:58:13.077484 2604 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 23:58:13.081936 kubelet[2604]: I0912 23:58:13.081736 2604 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 23:58:13.086275 kubelet[2604]: I0912 23:58:13.085979 2604 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:58:13.092710 kubelet[2604]: E0912 23:58:13.092632 2604 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:58:13.092710 kubelet[2604]: I0912 23:58:13.092710 2604 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:58:13.098134 kubelet[2604]: I0912 23:58:13.098095 2604 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 23:58:13.099483 kubelet[2604]: I0912 23:58:13.098673 2604 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:58:13.099483 kubelet[2604]: I0912 23:58:13.098811 2604 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-f526684106","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:58:13.099483 kubelet[2604]: I0912 23:58:13.099177 2604 topology_manager.go:138] 
"Creating topology manager with none policy" Sep 12 23:58:13.099483 kubelet[2604]: I0912 23:58:13.099195 2604 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 23:58:13.099483 kubelet[2604]: I0912 23:58:13.099255 2604 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:58:13.099890 kubelet[2604]: I0912 23:58:13.099873 2604 kubelet.go:480] "Attempting to sync node with API server" Sep 12 23:58:13.100713 kubelet[2604]: I0912 23:58:13.100688 2604 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:58:13.100880 kubelet[2604]: I0912 23:58:13.100870 2604 kubelet.go:386] "Adding apiserver pod source" Sep 12 23:58:13.100951 kubelet[2604]: I0912 23:58:13.100941 2604 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:58:13.104817 kubelet[2604]: I0912 23:58:13.104782 2604 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:58:13.105975 kubelet[2604]: I0912 23:58:13.105940 2604 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 23:58:13.112459 kubelet[2604]: I0912 23:58:13.111982 2604 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:58:13.112459 kubelet[2604]: I0912 23:58:13.112042 2604 server.go:1289] "Started kubelet" Sep 12 23:58:13.117790 kubelet[2604]: I0912 23:58:13.117095 2604 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:58:13.134777 kubelet[2604]: I0912 23:58:13.134724 2604 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:58:13.135941 kubelet[2604]: I0912 23:58:13.135911 2604 server.go:317] "Adding debug handlers to kubelet server" Sep 12 23:58:13.145628 kubelet[2604]: I0912 23:58:13.143763 2604 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:58:13.145628 kubelet[2604]: I0912 23:58:13.144000 2604 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:58:13.145628 kubelet[2604]: I0912 23:58:13.144632 2604 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:58:13.148641 kubelet[2604]: I0912 23:58:13.148566 2604 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 23:58:13.152996 kubelet[2604]: I0912 23:58:13.152946 2604 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:58:13.153141 kubelet[2604]: I0912 23:58:13.153123 2604 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:58:13.155392 kubelet[2604]: I0912 23:58:13.155336 2604 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 23:58:13.156937 kubelet[2604]: I0912 23:58:13.156901 2604 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 23:58:13.157084 kubelet[2604]: I0912 23:58:13.157074 2604 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 23:58:13.157150 kubelet[2604]: I0912 23:58:13.157140 2604 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 23:58:13.157202 kubelet[2604]: I0912 23:58:13.157193 2604 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 23:58:13.157829 kubelet[2604]: E0912 23:58:13.157796 2604 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:58:13.161310 kubelet[2604]: I0912 23:58:13.161244 2604 factory.go:223] Registration of the systemd container factory successfully Sep 12 23:58:13.161438 kubelet[2604]: I0912 23:58:13.161375 2604 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:58:13.164254 kubelet[2604]: E0912 23:58:13.164207 2604 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:58:13.167123 kubelet[2604]: I0912 23:58:13.167077 2604 factory.go:223] Registration of the containerd container factory successfully Sep 12 23:58:13.238660 kubelet[2604]: I0912 23:58:13.238581 2604 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:58:13.238660 kubelet[2604]: I0912 23:58:13.238637 2604 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:58:13.238660 kubelet[2604]: I0912 23:58:13.238662 2604 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:58:13.238884 kubelet[2604]: I0912 23:58:13.238820 2604 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 23:58:13.238884 kubelet[2604]: I0912 23:58:13.238831 2604 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 23:58:13.238884 kubelet[2604]: I0912 23:58:13.238849 2604 policy_none.go:49] "None policy: Start" Sep 12 23:58:13.238884 kubelet[2604]: I0912 23:58:13.238859 2604 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:58:13.238884 kubelet[2604]: I0912 23:58:13.238868 2604 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:58:13.239034 kubelet[2604]: I0912 23:58:13.239028 2604 state_mem.go:75] "Updated machine memory state" Sep 12 23:58:13.244794 kubelet[2604]: E0912 23:58:13.244763 2604 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 23:58:13.244996 kubelet[2604]: I0912 23:58:13.244980 2604 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:58:13.245036 kubelet[2604]: I0912 23:58:13.245002 2604 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:58:13.245885 kubelet[2604]: I0912 23:58:13.245868 2604 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:58:13.249255 kubelet[2604]: E0912 23:58:13.249050 2604 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 23:58:13.259045 kubelet[2604]: I0912 23:58:13.258999 2604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.261651 kubelet[2604]: I0912 23:58:13.259891 2604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.261651 kubelet[2604]: I0912 23:58:13.260028 2604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.275382 kubelet[2604]: E0912 23:58:13.274644 2604 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-f526684106\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.354174 kubelet[2604]: I0912 23:58:13.353995 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d51acb427c8724bde6024fb2c915f2fd-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-f526684106\" (UID: \"d51acb427c8724bde6024fb2c915f2fd\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.354174 kubelet[2604]: I0912 23:58:13.354072 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d51acb427c8724bde6024fb2c915f2fd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-f526684106\" (UID: \"d51acb427c8724bde6024fb2c915f2fd\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.354174 kubelet[2604]: I0912 23:58:13.354115 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f14af745af266b1493306e3660df8b2-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-f526684106\" (UID: \"2f14af745af266b1493306e3660df8b2\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.354174 kubelet[2604]: I0912 23:58:13.354157 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f14af745af266b1493306e3660df8b2-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-f526684106\" (UID: \"2f14af745af266b1493306e3660df8b2\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.354413 kubelet[2604]: I0912 23:58:13.354191 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b98d27e81c49f0100411d75ed71e64a9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-f526684106\" (UID: \"b98d27e81c49f0100411d75ed71e64a9\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.354413 kubelet[2604]: I0912 23:58:13.354225 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d51acb427c8724bde6024fb2c915f2fd-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-f526684106\" (UID: \"d51acb427c8724bde6024fb2c915f2fd\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.354413 kubelet[2604]: I0912 23:58:13.354259 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/2f14af745af266b1493306e3660df8b2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-f526684106\" (UID: \"2f14af745af266b1493306e3660df8b2\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.354413 kubelet[2604]: I0912 23:58:13.354290 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f14af745af266b1493306e3660df8b2-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-f526684106\" (UID: \"2f14af745af266b1493306e3660df8b2\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.354413 kubelet[2604]: I0912 23:58:13.354327 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f14af745af266b1493306e3660df8b2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-f526684106\" (UID: \"2f14af745af266b1493306e3660df8b2\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:13.357401 kubelet[2604]: I0912 23:58:13.357145 2604 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:13.373262 kubelet[2604]: I0912 23:58:13.373214 2604 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:13.373509 kubelet[2604]: I0912 23:58:13.373321 2604 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-f526684106" Sep 12 23:58:14.104635 kubelet[2604]: I0912 23:58:14.104302 2604 apiserver.go:52] "Watching apiserver" Sep 12 23:58:14.154629 kubelet[2604]: I0912 23:58:14.153292 2604 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:58:14.220394 kubelet[2604]: I0912 23:58:14.219396 2604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-f526684106" Sep 12 23:58:14.220394 kubelet[2604]: I0912 23:58:14.219697 2604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:14.220394 kubelet[2604]: I0912 23:58:14.220176 2604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:14.229728 kubelet[2604]: E0912 23:58:14.229106 2604 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-f526684106\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-f526684106" Sep 12 23:58:14.233429 kubelet[2604]: E0912 23:58:14.233380 2604 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-f526684106\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" Sep 12 23:58:14.234019 kubelet[2604]: E0912 23:58:14.233999 2604 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-n-f526684106\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" Sep 12 23:58:14.267163 kubelet[2604]: I0912 23:58:14.267066 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-f526684106" podStartSLOduration=1.266390011 podStartE2EDuration="1.266390011s" podCreationTimestamp="2025-09-12 23:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:14.252436701 +0000 UTC m=+1.244122950" watchObservedRunningTime="2025-09-12 23:58:14.266390011 +0000 UTC m=+1.258076260" Sep 12 23:58:14.282201 kubelet[2604]: I0912 23:58:14.282136 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-f526684106" podStartSLOduration=3.282116443 podStartE2EDuration="3.282116443s" podCreationTimestamp="2025-09-12 23:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:14.267784182 +0000 UTC m=+1.259470471" watchObservedRunningTime="2025-09-12 23:58:14.282116443 +0000 UTC m=+1.273802692" Sep 12 23:58:20.201654 kubelet[2604]: I0912 23:58:20.201520 2604 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:58:20.202302 containerd[1466]: time="2025-09-12T23:58:20.201999609Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 23:58:20.202710 kubelet[2604]: I0912 23:58:20.202334 2604 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:58:20.907786 kubelet[2604]: I0912 23:58:20.907027 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f526684106" podStartSLOduration=7.907008366 podStartE2EDuration="7.907008366s" podCreationTimestamp="2025-09-12 23:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:14.282904227 +0000 UTC m=+1.274590436" watchObservedRunningTime="2025-09-12 23:58:20.907008366 +0000 UTC m=+7.898694615" Sep 12 23:58:20.928627 systemd[1]: Created slice kubepods-besteffort-podad87fc33_a98b_4f4e_a3f2_e0bca0b8f6fc.slice - libcontainer container kubepods-besteffort-podad87fc33_a98b_4f4e_a3f2_e0bca0b8f6fc.slice. 
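The paired "Updating runtime config through cri with podcidr" / "Updating Pod CIDR" entries at 23:58:20 correspond to a single CRI UpdateRuntimeConfig call carrying the node's pod CIDR; containerd acknowledges it with the "No cni config template is specified" notice seen above. A hedged sketch of that call, with the CIDR taken from the log and the client wiring assumed:

```go
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// pushPodCIDR mirrors the single CRI call behind the two log entries above.
func pushPodCIDR(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	_, err := rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	return err
}
```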
Sep 12 23:58:20.999443 kubelet[2604]: I0912 23:58:20.999092 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc-lib-modules\") pod \"kube-proxy-nbdc7\" (UID: \"ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc\") " pod="kube-system/kube-proxy-nbdc7" Sep 12 23:58:20.999443 kubelet[2604]: I0912 23:58:20.999250 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc-kube-proxy\") pod \"kube-proxy-nbdc7\" (UID: \"ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc\") " pod="kube-system/kube-proxy-nbdc7" Sep 12 23:58:20.999443 kubelet[2604]: I0912 23:58:20.999302 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc-xtables-lock\") pod \"kube-proxy-nbdc7\" (UID: \"ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc\") " pod="kube-system/kube-proxy-nbdc7" Sep 12 23:58:20.999443 kubelet[2604]: I0912 23:58:20.999371 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsbd5\" (UniqueName: \"kubernetes.io/projected/ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc-kube-api-access-nsbd5\") pod \"kube-proxy-nbdc7\" (UID: \"ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc\") " pod="kube-system/kube-proxy-nbdc7" Sep 12 23:58:21.062617 systemd[1]: Created slice kubepods-besteffort-poda03b4e1d_d480_4b44_bca6_9a0fedb3c30b.slice - libcontainer container kubepods-besteffort-poda03b4e1d_d480_4b44_bca6_9a0fedb3c30b.slice. Sep 12 23:58:21.101210 kubelet[2604]: I0912 23:58:21.099876 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a03b4e1d-d480-4b44-bca6-9a0fedb3c30b-var-lib-calico\") pod \"tigera-operator-755d956888-68fxl\" (UID: \"a03b4e1d-d480-4b44-bca6-9a0fedb3c30b\") " pod="tigera-operator/tigera-operator-755d956888-68fxl" Sep 12 23:58:21.101210 kubelet[2604]: I0912 23:58:21.099943 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqs9\" (UniqueName: \"kubernetes.io/projected/a03b4e1d-d480-4b44-bca6-9a0fedb3c30b-kube-api-access-7vqs9\") pod \"tigera-operator-755d956888-68fxl\" (UID: \"a03b4e1d-d480-4b44-bca6-9a0fedb3c30b\") " pod="tigera-operator/tigera-operator-755d956888-68fxl" Sep 12 23:58:21.241052 containerd[1466]: time="2025-09-12T23:58:21.241007398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbdc7,Uid:ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:21.271196 containerd[1466]: time="2025-09-12T23:58:21.270836069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:21.271196 containerd[1466]: time="2025-09-12T23:58:21.270898948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:21.271196 containerd[1466]: time="2025-09-12T23:58:21.270911508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:21.271196 containerd[1466]: time="2025-09-12T23:58:21.270998867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:21.303867 systemd[1]: Started cri-containerd-a164e9a757d231d6f264e449bd01b4534fa7965a14005fb23ef6b9ef6f3bc99c.scope - libcontainer container a164e9a757d231d6f264e449bd01b4534fa7965a14005fb23ef6b9ef6f3bc99c. Sep 12 23:58:21.336369 containerd[1466]: time="2025-09-12T23:58:21.335575815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbdc7,Uid:ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a164e9a757d231d6f264e449bd01b4534fa7965a14005fb23ef6b9ef6f3bc99c\"" Sep 12 23:58:21.343227 containerd[1466]: time="2025-09-12T23:58:21.343105102Z" level=info msg="CreateContainer within sandbox \"a164e9a757d231d6f264e449bd01b4534fa7965a14005fb23ef6b9ef6f3bc99c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:58:21.360416 containerd[1466]: time="2025-09-12T23:58:21.360348322Z" level=info msg="CreateContainer within sandbox \"a164e9a757d231d6f264e449bd01b4534fa7965a14005fb23ef6b9ef6f3bc99c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f71ff4771e7f3069438248f38bd17d8d08b093aeef1de445e10e84b07b6828c\"" Sep 12 23:58:21.361962 containerd[1466]: time="2025-09-12T23:58:21.361883579Z" level=info msg="StartContainer for \"8f71ff4771e7f3069438248f38bd17d8d08b093aeef1de445e10e84b07b6828c\"" Sep 12 23:58:21.368184 containerd[1466]: time="2025-09-12T23:58:21.368132085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-68fxl,Uid:a03b4e1d-d480-4b44-bca6-9a0fedb3c30b,Namespace:tigera-operator,Attempt:0,}" Sep 12 23:58:21.392861 systemd[1]: Started cri-containerd-8f71ff4771e7f3069438248f38bd17d8d08b093aeef1de445e10e84b07b6828c.scope - libcontainer container 8f71ff4771e7f3069438248f38bd17d8d08b093aeef1de445e10e84b07b6828c. Sep 12 23:58:21.408843 containerd[1466]: time="2025-09-12T23:58:21.408403359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:21.408843 containerd[1466]: time="2025-09-12T23:58:21.408466839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:21.408843 containerd[1466]: time="2025-09-12T23:58:21.408482838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:21.408843 containerd[1466]: time="2025-09-12T23:58:21.408568957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:21.432928 systemd[1]: Started cri-containerd-d92bf5c14ab0f94671945f6fc3a576219c36c9dbee5fb6d70b608a9f77c650ab.scope - libcontainer container d92bf5c14ab0f94671945f6fc3a576219c36c9dbee5fb6d70b608a9f77c650ab. 
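The kube-proxy-nbdc7 entries above trace the standard CRI lifecycle: RunPodSandbox returns sandbox id a164e9a7..., CreateContainer returns 8f71ff47... inside it, and StartContainer runs it. A hedged sketch of those three RPCs, not the kubelet's code; the configs are skeletal, and the image tag is an assumption inferred from the kubelet version rather than anything in the log:

```go
package crisketch

import (
	"context"
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startKubeProxy sketches the sandbox -> create -> start sequence logged above.
func startKubeProxy(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-nbdc7",
				Namespace: "kube-system",
				Uid:       "ad87fc33-a98b-4f4e-a3f2-e0bca0b8f6fc",
			},
		},
	})
	if err != nil {
		return err
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"}, // assumed tag
		},
	})
	if err != nil {
		return err
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		return err
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
	return nil
}
```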
Sep 12 23:58:21.438553 containerd[1466]: time="2025-09-12T23:58:21.438455587Z" level=info msg="StartContainer for \"8f71ff4771e7f3069438248f38bd17d8d08b093aeef1de445e10e84b07b6828c\" returns successfully" Sep 12 23:58:21.490291 containerd[1466]: time="2025-09-12T23:58:21.490220249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-68fxl,Uid:a03b4e1d-d480-4b44-bca6-9a0fedb3c30b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d92bf5c14ab0f94671945f6fc3a576219c36c9dbee5fb6d70b608a9f77c650ab\"" Sep 12 23:58:21.495147 containerd[1466]: time="2025-09-12T23:58:21.494941578Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 23:58:22.122572 systemd[1]: run-containerd-runc-k8s.io-a164e9a757d231d6f264e449bd01b4534fa7965a14005fb23ef6b9ef6f3bc99c-runc.78FUT9.mount: Deactivated successfully. Sep 12 23:58:23.111221 kubelet[2604]: I0912 23:58:23.110845 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nbdc7" podStartSLOduration=3.110824796 podStartE2EDuration="3.110824796s" podCreationTimestamp="2025-09-12 23:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:22.260775061 +0000 UTC m=+9.252461390" watchObservedRunningTime="2025-09-12 23:58:23.110824796 +0000 UTC m=+10.102511045" Sep 12 23:58:23.345896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount824001540.mount: Deactivated successfully. Sep 12 23:58:23.762442 containerd[1466]: time="2025-09-12T23:58:23.762385272Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:23.764430 containerd[1466]: time="2025-09-12T23:58:23.764370125Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 23:58:23.765981 containerd[1466]: time="2025-09-12T23:58:23.765845945Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:23.770532 containerd[1466]: time="2025-09-12T23:58:23.770130725Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:23.771201 containerd[1466]: time="2025-09-12T23:58:23.770783836Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.275753261s" Sep 12 23:58:23.771201 containerd[1466]: time="2025-09-12T23:58:23.770820156Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 23:58:23.777908 containerd[1466]: time="2025-09-12T23:58:23.777839139Z" level=info msg="CreateContainer within sandbox \"d92bf5c14ab0f94671945f6fc3a576219c36c9dbee5fb6d70b608a9f77c650ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 23:58:23.793225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678804321.mount: Deactivated successfully. 
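The PullImage request above and the ImageCreate/Pulled events that follow it are the CRI ImageService at work: the kubelet asks for the tag and containerd resolves it to the repo digest reported above. A hedged standalone sketch of that pull, assuming the conventional containerd socket and grpc.NewClient:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	// The tag pulled in the log; the response carries the resolved reference
	// that containerd logs above as the image id / repo digest.
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.6"},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("image ref:", resp.ImageRef)
}
```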
Sep 12 23:58:23.798806 containerd[1466]: time="2025-09-12T23:58:23.798558653Z" level=info msg="CreateContainer within sandbox \"d92bf5c14ab0f94671945f6fc3a576219c36c9dbee5fb6d70b608a9f77c650ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc\"" Sep 12 23:58:23.800522 containerd[1466]: time="2025-09-12T23:58:23.800483346Z" level=info msg="StartContainer for \"ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc\"" Sep 12 23:58:23.843178 systemd[1]: Started cri-containerd-ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc.scope - libcontainer container ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc. Sep 12 23:58:23.872831 containerd[1466]: time="2025-09-12T23:58:23.872782707Z" level=info msg="StartContainer for \"ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc\" returns successfully" Sep 12 23:58:28.394788 sudo[1715]: pam_unix(sudo:session): session closed for user root Sep 12 23:58:28.554199 sshd[1712]: pam_unix(sshd:session): session closed for user core Sep 12 23:58:28.559295 systemd[1]: sshd@6-128.140.85.90:22-147.75.109.163:48988.service: Deactivated successfully. Sep 12 23:58:28.563400 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:58:28.567649 systemd[1]: session-7.scope: Consumed 9.112s CPU time, 157.6M memory peak, 0B memory swap peak. Sep 12 23:58:28.568890 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:58:28.573626 systemd-logind[1456]: Removed session 7. Sep 12 23:58:37.161305 kubelet[2604]: I0912 23:58:37.160790 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-68fxl" podStartSLOduration=14.881667212 podStartE2EDuration="17.160760224s" podCreationTimestamp="2025-09-12 23:58:20 +0000 UTC" firstStartedPulling="2025-09-12 23:58:21.492899728 +0000 UTC m=+8.484585977" lastFinishedPulling="2025-09-12 23:58:23.77199274 +0000 UTC m=+10.763678989" observedRunningTime="2025-09-12 23:58:24.270109887 +0000 UTC m=+11.261796176" watchObservedRunningTime="2025-09-12 23:58:37.160760224 +0000 UTC m=+24.152446473" Sep 12 23:58:37.179505 systemd[1]: Created slice kubepods-besteffort-podfcfcb340_3634_4cdc_b6bb_22e3fdc466f5.slice - libcontainer container kubepods-besteffort-podfcfcb340_3634_4cdc_b6bb_22e3fdc466f5.slice. 
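The tigera-operator startup-latency entry above also shows how pod_startup_latency_tracker discounts image-pull time: the SLO duration is the E2E duration minus the pull window (firstStartedPulling to lastFinishedPulling). A small Go check of that arithmetic, using the timestamps copied verbatim from the entry; it prints 17.160760224s and 14.881667212s, matching the logged values exactly:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-12 23:58:20 +0000 UTC")             // podCreationTimestamp
	watched := mustParse("2025-09-12 23:58:37.160760224 +0000 UTC")   // watchObservedRunningTime
	pullStart := mustParse("2025-09-12 23:58:21.492899728 +0000 UTC") // firstStartedPulling
	pullEnd := mustParse("2025-09-12 23:58:23.77199274 +0000 UTC")    // lastFinishedPulling

	e2e := watched.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // pull window discounted
	fmt.Println(e2e, slo)               // 17.160760224s 14.881667212s
}
```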
Sep 12 23:58:37.202957 kubelet[2604]: I0912 23:58:37.202872 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcfcb340-3634-4cdc-b6bb-22e3fdc466f5-tigera-ca-bundle\") pod \"calico-typha-5d48479895-hp2vx\" (UID: \"fcfcb340-3634-4cdc-b6bb-22e3fdc466f5\") " pod="calico-system/calico-typha-5d48479895-hp2vx" Sep 12 23:58:37.202957 kubelet[2604]: I0912 23:58:37.202917 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fcfcb340-3634-4cdc-b6bb-22e3fdc466f5-typha-certs\") pod \"calico-typha-5d48479895-hp2vx\" (UID: \"fcfcb340-3634-4cdc-b6bb-22e3fdc466f5\") " pod="calico-system/calico-typha-5d48479895-hp2vx" Sep 12 23:58:37.202957 kubelet[2604]: I0912 23:58:37.202937 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk2hs\" (UniqueName: \"kubernetes.io/projected/fcfcb340-3634-4cdc-b6bb-22e3fdc466f5-kube-api-access-jk2hs\") pod \"calico-typha-5d48479895-hp2vx\" (UID: \"fcfcb340-3634-4cdc-b6bb-22e3fdc466f5\") " pod="calico-system/calico-typha-5d48479895-hp2vx" Sep 12 23:58:37.364332 systemd[1]: Created slice kubepods-besteffort-pod0449d45e_a751_47d3_9d3e_5a131b829a67.slice - libcontainer container kubepods-besteffort-pod0449d45e_a751_47d3_9d3e_5a131b829a67.slice. Sep 12 23:58:37.404980 kubelet[2604]: I0912 23:58:37.404935 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0449d45e-a751-47d3-9d3e-5a131b829a67-policysync\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.404980 kubelet[2604]: I0912 23:58:37.404981 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0449d45e-a751-47d3-9d3e-5a131b829a67-cni-log-dir\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.405234 kubelet[2604]: I0912 23:58:37.405000 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0449d45e-a751-47d3-9d3e-5a131b829a67-xtables-lock\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.405234 kubelet[2604]: I0912 23:58:37.405016 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7xg\" (UniqueName: \"kubernetes.io/projected/0449d45e-a751-47d3-9d3e-5a131b829a67-kube-api-access-jj7xg\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.405234 kubelet[2604]: I0912 23:58:37.405039 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0449d45e-a751-47d3-9d3e-5a131b829a67-cni-net-dir\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.405234 kubelet[2604]: I0912 23:58:37.405054 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/0449d45e-a751-47d3-9d3e-5a131b829a67-tigera-ca-bundle\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.405234 kubelet[2604]: I0912 23:58:37.405079 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0449d45e-a751-47d3-9d3e-5a131b829a67-cni-bin-dir\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.405346 kubelet[2604]: I0912 23:58:37.405108 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0449d45e-a751-47d3-9d3e-5a131b829a67-lib-modules\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.405346 kubelet[2604]: I0912 23:58:37.405129 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0449d45e-a751-47d3-9d3e-5a131b829a67-var-lib-calico\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.405346 kubelet[2604]: I0912 23:58:37.405145 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0449d45e-a751-47d3-9d3e-5a131b829a67-var-run-calico\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.405346 kubelet[2604]: I0912 23:58:37.405162 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0449d45e-a751-47d3-9d3e-5a131b829a67-flexvol-driver-host\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.405346 kubelet[2604]: I0912 23:58:37.405179 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0449d45e-a751-47d3-9d3e-5a131b829a67-node-certs\") pod \"calico-node-rswvq\" (UID: \"0449d45e-a751-47d3-9d3e-5a131b829a67\") " pod="calico-system/calico-node-rswvq" Sep 12 23:58:37.486879 containerd[1466]: time="2025-09-12T23:58:37.486830766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d48479895-hp2vx,Uid:fcfcb340-3634-4cdc-b6bb-22e3fdc466f5,Namespace:calico-system,Attempt:0,}" Sep 12 23:58:37.521166 kubelet[2604]: E0912 23:58:37.515668 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.521166 kubelet[2604]: W0912 23:58:37.515707 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.521166 kubelet[2604]: E0912 23:58:37.515737 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:58:37.521166 kubelet[2604]: E0912 23:58:37.517555 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.521166 kubelet[2604]: W0912 23:58:37.517593 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.521166 kubelet[2604]: E0912 23:58:37.517674 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.521166 kubelet[2604]: E0912 23:58:37.518849 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.521166 kubelet[2604]: W0912 23:58:37.519806 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.521166 kubelet[2604]: E0912 23:58:37.519852 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.521166 kubelet[2604]: E0912 23:58:37.520920 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.521497 kubelet[2604]: W0912 23:58:37.520937 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.525629 kubelet[2604]: E0912 23:58:37.523652 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.525629 kubelet[2604]: E0912 23:58:37.524780 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.525629 kubelet[2604]: W0912 23:58:37.524801 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.525629 kubelet[2604]: E0912 23:58:37.524826 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.525629 kubelet[2604]: E0912 23:58:37.525475 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.525629 kubelet[2604]: W0912 23:58:37.525490 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.526704 kubelet[2604]: E0912 23:58:37.525505 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:58:37.540683 containerd[1466]: time="2025-09-12T23:58:37.539020561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:37.540683 containerd[1466]: time="2025-09-12T23:58:37.539195960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:37.540683 containerd[1466]: time="2025-09-12T23:58:37.539223679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:37.540683 containerd[1466]: time="2025-09-12T23:58:37.539333639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:37.558556 kubelet[2604]: E0912 23:58:37.558351 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.559646 kubelet[2604]: W0912 23:58:37.558908 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.559646 kubelet[2604]: E0912 23:58:37.558974 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.567840 kubelet[2604]: E0912 23:58:37.567673 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd2gt" podUID="0c4685c7-dc20-48ea-a187-79c68ef78d4c" Sep 12 23:58:37.587852 systemd[1]: Started cri-containerd-1705ddfbb8e9ffd62e8bc10a5004c53b9767ae016a441dc65f433c77dfe37117.scope - libcontainer container 1705ddfbb8e9ffd62e8bc10a5004c53b9767ae016a441dc65f433c77dfe37117. Sep 12 23:58:37.605271 kubelet[2604]: E0912 23:58:37.605169 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.605271 kubelet[2604]: W0912 23:58:37.605195 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.605271 kubelet[2604]: E0912 23:58:37.605218 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.606761 kubelet[2604]: E0912 23:58:37.606534 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.606761 kubelet[2604]: W0912 23:58:37.606558 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.606761 kubelet[2604]: E0912 23:58:37.606655 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:58:37.608038 kubelet[2604]: E0912 23:58:37.607848 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.608038 kubelet[2604]: W0912 23:58:37.607868 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.608038 kubelet[2604]: E0912 23:58:37.607886 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.608972 kubelet[2604]: E0912 23:58:37.608815 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.608972 kubelet[2604]: W0912 23:58:37.608852 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.608972 kubelet[2604]: E0912 23:58:37.608873 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.609818 kubelet[2604]: E0912 23:58:37.609691 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.609818 kubelet[2604]: W0912 23:58:37.609706 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.609818 kubelet[2604]: E0912 23:58:37.609720 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.610802 kubelet[2604]: E0912 23:58:37.610677 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.610802 kubelet[2604]: W0912 23:58:37.610693 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.610802 kubelet[2604]: E0912 23:58:37.610707 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.611138 kubelet[2604]: E0912 23:58:37.611018 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.611138 kubelet[2604]: W0912 23:58:37.611031 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.611138 kubelet[2604]: E0912 23:58:37.611042 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:58:37.612544 kubelet[2604]: E0912 23:58:37.612390 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.612544 kubelet[2604]: W0912 23:58:37.612408 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.612544 kubelet[2604]: E0912 23:58:37.612422 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.612974 kubelet[2604]: E0912 23:58:37.612888 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.612974 kubelet[2604]: W0912 23:58:37.612900 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.612974 kubelet[2604]: E0912 23:58:37.612911 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.613357 kubelet[2604]: E0912 23:58:37.613264 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.613357 kubelet[2604]: W0912 23:58:37.613276 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.613357 kubelet[2604]: E0912 23:58:37.613286 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.614942 kubelet[2604]: E0912 23:58:37.614821 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.614942 kubelet[2604]: W0912 23:58:37.614838 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.614942 kubelet[2604]: E0912 23:58:37.614854 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.615395 kubelet[2604]: E0912 23:58:37.615283 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.615395 kubelet[2604]: W0912 23:58:37.615295 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.615395 kubelet[2604]: E0912 23:58:37.615307 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:58:37.615808 kubelet[2604]: E0912 23:58:37.615712 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.615808 kubelet[2604]: W0912 23:58:37.615725 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.615808 kubelet[2604]: E0912 23:58:37.615735 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.616834 kubelet[2604]: E0912 23:58:37.616703 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.616834 kubelet[2604]: W0912 23:58:37.616718 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.616834 kubelet[2604]: E0912 23:58:37.616729 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.617138 kubelet[2604]: E0912 23:58:37.617047 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.617138 kubelet[2604]: W0912 23:58:37.617059 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.617138 kubelet[2604]: E0912 23:58:37.617069 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.617698 kubelet[2604]: E0912 23:58:37.617683 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.617986 kubelet[2604]: W0912 23:58:37.617779 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.617986 kubelet[2604]: E0912 23:58:37.617797 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:58:37.618735 kubelet[2604]: E0912 23:58:37.618449 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:58:37.618901 kubelet[2604]: W0912 23:58:37.618828 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:58:37.618901 kubelet[2604]: E0912 23:58:37.618852 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 23:58:37.622273 kubelet[2604]: E0912 23:58:37.621805 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:58:37.622560 kubelet[2604]: W0912 23:58:37.622432 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:58:37.622560 kubelet[2604]: E0912 23:58:37.622471 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the identical E0912 driver-call.go / W0912 driver-call.go / E0912 plugins.go triplet from kubelet[2604] repeats continuously from 23:58:37.623208 through 23:58:37.801765; only the distinct entries interleaved with the repeats are kept below …]
Sep 12 23:58:37.626689 kubelet[2604]: I0912 23:58:37.626635 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c4685c7-dc20-48ea-a187-79c68ef78d4c-socket-dir\") pod \"csi-node-driver-fd2gt\" (UID: \"0c4685c7-dc20-48ea-a187-79c68ef78d4c\") " pod="calico-system/csi-node-driver-fd2gt"
Sep 12 23:58:37.628694 kubelet[2604]: I0912 23:58:37.628162 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0c4685c7-dc20-48ea-a187-79c68ef78d4c-varrun\") pod \"csi-node-driver-fd2gt\" (UID: \"0c4685c7-dc20-48ea-a187-79c68ef78d4c\") " pod="calico-system/csi-node-driver-fd2gt"
Sep 12 23:58:37.628694 kubelet[2604]: I0912 23:58:37.628443 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c4685c7-dc20-48ea-a187-79c68ef78d4c-kubelet-dir\") pod \"csi-node-driver-fd2gt\" (UID: \"0c4685c7-dc20-48ea-a187-79c68ef78d4c\") " pod="calico-system/csi-node-driver-fd2gt"
Sep 12 23:58:37.629146 kubelet[2604]: I0912 23:58:37.628953 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c4685c7-dc20-48ea-a187-79c68ef78d4c-registration-dir\") pod \"csi-node-driver-fd2gt\" (UID: \"0c4685c7-dc20-48ea-a187-79c68ef78d4c\") " pod="calico-system/csi-node-driver-fd2gt"
Sep 12 23:58:37.634615 kubelet[2604]: I0912 23:58:37.634391 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7cbv\" (UniqueName: \"kubernetes.io/projected/0c4685c7-dc20-48ea-a187-79c68ef78d4c-kube-api-access-h7cbv\") pod \"csi-node-driver-fd2gt\" (UID: \"0c4685c7-dc20-48ea-a187-79c68ef78d4c\") " pod="calico-system/csi-node-driver-fd2gt"
Sep 12 23:58:37.669984 containerd[1466]: time="2025-09-12T23:58:37.669412290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d48479895-hp2vx,Uid:fcfcb340-3634-4cdc-b6bb-22e3fdc466f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"1705ddfbb8e9ffd62e8bc10a5004c53b9767ae016a441dc65f433c77dfe37117\""
Sep 12 23:58:37.670134 containerd[1466]: time="2025-09-12T23:58:37.669999285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rswvq,Uid:0449d45e-a751-47d3-9d3e-5a131b829a67,Namespace:calico-system,Attempt:0,}"
Sep 12 23:58:37.677311 containerd[1466]: time="2025-09-12T23:58:37.677080065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 23:58:37.722011 containerd[1466]: time="2025-09-12T23:58:37.721702805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 23:58:37.722011 containerd[1466]: time="2025-09-12T23:58:37.721785724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 23:58:37.722011 containerd[1466]: time="2025-09-12T23:58:37.721797044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:58:37.722011 containerd[1466]: time="2025-09-12T23:58:37.721913523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:58:37.760850 systemd[1]: Started cri-containerd-5b048d35c35b055b5a3ab68608307af1953212e957905f928445826de9d439c0.scope - libcontainer container 5b048d35c35b055b5a3ab68608307af1953212e957905f928445826de9d439c0.
Sep 12 23:58:37.891475 containerd[1466]: time="2025-09-12T23:58:37.891416799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rswvq,Uid:0449d45e-a751-47d3-9d3e-5a131b829a67,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b048d35c35b055b5a3ab68608307af1953212e957905f928445826de9d439c0\""
Sep 12 23:58:39.072321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount148199070.mount: Deactivated successfully.
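The E/W/E triplet that dominates this stretch is kubelet's FlexVolume prober failing in a loop: each probe execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet (the flexvol-driver container that installs it only runs at 23:58:41 below), so the call fails with os/exec's "executable file not found in $PATH" and produces no output, and unmarshalling that empty output is what yields "unexpected end of JSON input". A minimal stand-alone Go sketch reproduces both messages with nothing but the standard library (this is not kubelet's actual driver-call code; it assumes no "uds" binary is on $PATH):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // "uds" stands in for the not-yet-installed FlexVolume driver binary;
        // the lookup fails the same way driver-call.go logs above.
        out, err := exec.Command("uds", "init").Output()
        if err != nil {
            fmt.Println("driver call failed:", err) // exec: "uds": executable file not found in $PATH
        }

        // The failed call produced no output, and unmarshalling "" as JSON
        // yields the other recurring message.
        var status map[string]interface{}
        if err := json.Unmarshal(out, &status); err != nil {
            fmt.Println("unmarshal failed:", err) // unexpected end of JSON input
        }
    }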
Sep 12 23:58:39.159291 kubelet[2604]: E0912 23:58:39.159184 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd2gt" podUID="0c4685c7-dc20-48ea-a187-79c68ef78d4c"
Sep 12 23:58:39.670672 containerd[1466]: time="2025-09-12T23:58:39.669081931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:58:39.670672 containerd[1466]: time="2025-09-12T23:58:39.670568199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 12 23:58:39.671638 containerd[1466]: time="2025-09-12T23:58:39.671549311Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:58:39.674563 containerd[1466]: time="2025-09-12T23:58:39.674498607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:58:39.675568 containerd[1466]: time="2025-09-12T23:58:39.675525879Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.998221856s"
Sep 12 23:58:39.675568 containerd[1466]: time="2025-09-12T23:58:39.675567879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 12 23:58:39.680286 containerd[1466]: time="2025-09-12T23:58:39.679972523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 23:58:39.700017 containerd[1466]: time="2025-09-12T23:58:39.699978321Z" level=info msg="CreateContainer within sandbox \"1705ddfbb8e9ffd62e8bc10a5004c53b9767ae016a441dc65f433c77dfe37117\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 23:58:39.716493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352956994.mount: Deactivated successfully.
Sep 12 23:58:39.720970 containerd[1466]: time="2025-09-12T23:58:39.720919432Z" level=info msg="CreateContainer within sandbox \"1705ddfbb8e9ffd62e8bc10a5004c53b9767ae016a441dc65f433c77dfe37117\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0ac1cfa602c60316447269248d2d719b98712f0c0366f4cba13e4b54a3e6267a\""
Sep 12 23:58:39.721825 containerd[1466]: time="2025-09-12T23:58:39.721792385Z" level=info msg="StartContainer for \"0ac1cfa602c60316447269248d2d719b98712f0c0366f4cba13e4b54a3e6267a\""
Sep 12 23:58:39.762283 systemd[1]: Started cri-containerd-0ac1cfa602c60316447269248d2d719b98712f0c0366f4cba13e4b54a3e6267a.scope - libcontainer container 0ac1cfa602c60316447269248d2d719b98712f0c0366f4cba13e4b54a3e6267a.
Sep 12 23:58:39.824584 containerd[1466]: time="2025-09-12T23:58:39.824532954Z" level=info msg="StartContainer for \"0ac1cfa602c60316447269248d2d719b98712f0c0366f4cba13e4b54a3e6267a\" returns successfully"
Sep 12 23:58:40.344853 kubelet[2604]: E0912 23:58:40.344214 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:58:40.344853 kubelet[2604]: W0912 23:58:40.344662 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:58:40.344853 kubelet[2604]: E0912 23:58:40.344784 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the same kubelet[2604] FlexVolume probe-error triplet repeats continuously from 23:58:40.345574 through 23:58:40.374312; the repeats are elided …]
Sep 12 23:58:40.960927 containerd[1466]: time="2025-09-12T23:58:40.960823000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:58:40.962732 containerd[1466]: time="2025-09-12T23:58:40.962659346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 12 23:58:40.963917 containerd[1466]: time="2025-09-12T23:58:40.963808417Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:58:40.966167 containerd[1466]: time="2025-09-12T23:58:40.966099079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:58:40.967183 containerd[1466]: time="2025-09-12T23:58:40.966866033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.286336355s"
Sep 12 23:58:40.967183 containerd[1466]: time="2025-09-12T23:58:40.966914312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 12 23:58:40.974415 containerd[1466]: time="2025-09-12T23:58:40.974253574Z" level=info msg="CreateContainer within sandbox \"5b048d35c35b055b5a3ab68608307af1953212e957905f928445826de9d439c0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 23:58:40.993923 containerd[1466]: time="2025-09-12T23:58:40.993769820Z" level=info msg="CreateContainer within sandbox \"5b048d35c35b055b5a3ab68608307af1953212e957905f928445826de9d439c0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5bd1cdbd0377126a5d8c1ac3fdb4e9aacc8e79c45e290dd0d87d1686d38cfbb7\""
Sep 12 23:58:40.996068 containerd[1466]: time="2025-09-12T23:58:40.994617574Z" level=info msg="StartContainer for \"5bd1cdbd0377126a5d8c1ac3fdb4e9aacc8e79c45e290dd0d87d1686d38cfbb7\""
Sep 12 23:58:41.042890 systemd[1]: Started cri-containerd-5bd1cdbd0377126a5d8c1ac3fdb4e9aacc8e79c45e290dd0d87d1686d38cfbb7.scope - libcontainer container 5bd1cdbd0377126a5d8c1ac3fdb4e9aacc8e79c45e290dd0d87d1686d38cfbb7.
Sep 12 23:58:41.082180 containerd[1466]: time="2025-09-12T23:58:41.082015500Z" level=info msg="StartContainer for \"5bd1cdbd0377126a5d8c1ac3fdb4e9aacc8e79c45e290dd0d87d1686d38cfbb7\" returns successfully"
Sep 12 23:58:41.102437 systemd[1]: cri-containerd-5bd1cdbd0377126a5d8c1ac3fdb4e9aacc8e79c45e290dd0d87d1686d38cfbb7.scope: Deactivated successfully.
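The flexvol-driver container that just ran is Calico's pod2daemon-flexvol image; it copies the uds driver binary into the kubelet plugin directory probed above, which is what eventually silences the recurring triplet. By the FlexVolume convention, a driver answers init by printing a small JSON status object on stdout, the very thing driver-call.go has been failing to unmarshal. A sketch of that response shape in Go (field names follow the published FlexVolume examples; treat it as illustrative, not Calico's actual source):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DriverStatus mirrors the JSON a FlexVolume driver prints in response
    // to "init" (shape taken from the FlexVolume examples; illustrative).
    type DriverStatus struct {
        Status       string `json:"status"`
        Capabilities struct {
            Attach bool `json:"attach"`
        } `json:"capabilities"`
    }

    func main() {
        var s DriverStatus
        s.Status = "Success"
        // false advertises that the driver needs no attach/detach phase,
        // as expected for a Unix-domain-socket driver like uds.
        s.Capabilities.Attach = false

        b, _ := json.Marshal(s)
        fmt.Println(string(b)) // {"status":"Success","capabilities":{"attach":false}}
    }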
Sep 12 23:58:41.160711 kubelet[2604]: E0912 23:58:41.160397 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd2gt" podUID="0c4685c7-dc20-48ea-a187-79c68ef78d4c" Sep 12 23:58:41.242950 containerd[1466]: time="2025-09-12T23:58:41.242848861Z" level=info msg="shim disconnected" id=5bd1cdbd0377126a5d8c1ac3fdb4e9aacc8e79c45e290dd0d87d1686d38cfbb7 namespace=k8s.io Sep 12 23:58:41.243727 containerd[1466]: time="2025-09-12T23:58:41.243414257Z" level=warning msg="cleaning up after shim disconnected" id=5bd1cdbd0377126a5d8c1ac3fdb4e9aacc8e79c45e290dd0d87d1686d38cfbb7 namespace=k8s.io Sep 12 23:58:41.243727 containerd[1466]: time="2025-09-12T23:58:41.243464897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:58:41.308080 kubelet[2604]: I0912 23:58:41.307960 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:58:41.311785 containerd[1466]: time="2025-09-12T23:58:41.311663372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 23:58:41.328733 kubelet[2604]: I0912 23:58:41.328639 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d48479895-hp2vx" podStartSLOduration=2.3258353019999998 podStartE2EDuration="4.328592001s" podCreationTimestamp="2025-09-12 23:58:37 +0000 UTC" firstStartedPulling="2025-09-12 23:58:37.675873315 +0000 UTC m=+24.667559564" lastFinishedPulling="2025-09-12 23:58:39.678629894 +0000 UTC m=+26.670316263" observedRunningTime="2025-09-12 23:58:40.320268291 +0000 UTC m=+27.311954540" watchObservedRunningTime="2025-09-12 23:58:41.328592001 +0000 UTC m=+28.320278250" Sep 12 23:58:41.690564 systemd[1]: run-containerd-runc-k8s.io-5bd1cdbd0377126a5d8c1ac3fdb4e9aacc8e79c45e290dd0d87d1686d38cfbb7-runc.zCCuBy.mount: Deactivated successfully. Sep 12 23:58:41.691893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bd1cdbd0377126a5d8c1ac3fdb4e9aacc8e79c45e290dd0d87d1686d38cfbb7-rootfs.mount: Deactivated successfully. 
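
The pod_startup_latency_tracker entry above for calico-typha-5d48479895-hp2vx is internally consistent: podStartSLOduration is the end-to-end startup duration minus the time spent pulling images. A quick arithmetic check using the m=+... monotonic offsets from that line (a sketch, not kubelet code):

package main

import "fmt"

func main() {
	firstStartedPulling := 24.667559564 // m-offset, seconds (from the log)
	lastFinishedPulling := 26.670316263
	e2e := 4.328592001 // podStartE2EDuration

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, e2e-pull)
	// prints pull=2.002756699s slo=2.325835302s, matching
	// podStartSLOduration=2.325835302 in the log
}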
Sep 12 23:58:43.159641 kubelet[2604]: E0912 23:58:43.157918 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd2gt" podUID="0c4685c7-dc20-48ea-a187-79c68ef78d4c" Sep 12 23:58:43.652720 containerd[1466]: time="2025-09-12T23:58:43.652654623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:43.653998 containerd[1466]: time="2025-09-12T23:58:43.653946573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 23:58:43.655658 containerd[1466]: time="2025-09-12T23:58:43.654876606Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:43.658730 containerd[1466]: time="2025-09-12T23:58:43.658670899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:43.659886 containerd[1466]: time="2025-09-12T23:58:43.659849890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.348127599s" Sep 12 23:58:43.660011 containerd[1466]: time="2025-09-12T23:58:43.659994089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 23:58:43.665942 containerd[1466]: time="2025-09-12T23:58:43.665874605Z" level=info msg="CreateContainer within sandbox \"5b048d35c35b055b5a3ab68608307af1953212e957905f928445826de9d439c0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 23:58:43.683190 containerd[1466]: time="2025-09-12T23:58:43.683120839Z" level=info msg="CreateContainer within sandbox \"5b048d35c35b055b5a3ab68608307af1953212e957905f928445826de9d439c0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3\"" Sep 12 23:58:43.686120 containerd[1466]: time="2025-09-12T23:58:43.686004817Z" level=info msg="StartContainer for \"4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3\"" Sep 12 23:58:43.722709 systemd[1]: run-containerd-runc-k8s.io-4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3-runc.NK8K1A.mount: Deactivated successfully. Sep 12 23:58:43.732864 systemd[1]: Started cri-containerd-4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3.scope - libcontainer container 4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3. Sep 12 23:58:43.769738 containerd[1466]: time="2025-09-12T23:58:43.769689201Z" level=info msg="StartContainer for \"4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3\" returns successfully" Sep 12 23:58:44.307336 systemd[1]: cri-containerd-4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3.scope: Deactivated successfully. 
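
As with flexvol-driver before it, install-cni runs to completion: the cri-containerd scope is deactivated here, and the shim-disconnected cleanup for it follows in the next block. For inspecting such a container out-of-band, a hedged sketch using containerd's Go client against the "k8s.io" namespace that CRI containers live in (import paths per the containerd 1.x client; treat as illustrative):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container ID of install-cni, taken from the log above.
	c, err := client.LoadContainer(ctx,
		"4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	statusC, err := task.Wait(ctx) // resolves when the task exits
	if err != nil {
		log.Fatal(err)
	}
	status := <-statusC
	code, exitedAt, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("install-cni exited %d at %s\n", code, exitedAt)
}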
Sep 12 23:58:44.316835 kubelet[2604]: I0912 23:58:44.315776 2604 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 23:58:44.358531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3-rootfs.mount: Deactivated successfully. Sep 12 23:58:44.447481 containerd[1466]: time="2025-09-12T23:58:44.446818085Z" level=info msg="shim disconnected" id=4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3 namespace=k8s.io Sep 12 23:58:44.447481 containerd[1466]: time="2025-09-12T23:58:44.446881364Z" level=warning msg="cleaning up after shim disconnected" id=4db53828c6559da1b9912de588bb9c0674d0baba13deac7e5d3ba88ad01349f3 namespace=k8s.io Sep 12 23:58:44.447481 containerd[1466]: time="2025-09-12T23:58:44.446890724Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:58:44.454790 systemd[1]: Created slice kubepods-besteffort-pod7464ec4d_b10e_40dc_b6fa_a3b3579b9606.slice - libcontainer container kubepods-besteffort-pod7464ec4d_b10e_40dc_b6fa_a3b3579b9606.slice. Sep 12 23:58:44.470311 systemd[1]: Created slice kubepods-besteffort-pod4f365313_855d_43b4_a047_74e3275b3496.slice - libcontainer container kubepods-besteffort-pod4f365313_855d_43b4_a047_74e3275b3496.slice. Sep 12 23:58:44.489174 systemd[1]: Created slice kubepods-burstable-pod476a73de_395b_42d1_aef0_13cde14fd9c8.slice - libcontainer container kubepods-burstable-pod476a73de_395b_42d1_aef0_13cde14fd9c8.slice. Sep 12 23:58:44.498860 systemd[1]: Created slice kubepods-besteffort-podecc94c4e_e941_405c_8fac_8111372704be.slice - libcontainer container kubepods-besteffort-podecc94c4e_e941_405c_8fac_8111372704be.slice. Sep 12 23:58:44.501281 kubelet[2604]: I0912 23:58:44.501249 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecc94c4e-e941-405c-8fac-8111372704be-whisker-backend-key-pair\") pod \"whisker-674b8c84d8-crn2g\" (UID: \"ecc94c4e-e941-405c-8fac-8111372704be\") " pod="calico-system/whisker-674b8c84d8-crn2g" Sep 12 23:58:44.502039 kubelet[2604]: I0912 23:58:44.501992 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/476a73de-395b-42d1-aef0-13cde14fd9c8-config-volume\") pod \"coredns-674b8bbfcf-4mjz4\" (UID: \"476a73de-395b-42d1-aef0-13cde14fd9c8\") " pod="kube-system/coredns-674b8bbfcf-4mjz4" Sep 12 23:58:44.502191 kubelet[2604]: I0912 23:58:44.502171 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnpmj\" (UniqueName: \"kubernetes.io/projected/476a73de-395b-42d1-aef0-13cde14fd9c8-kube-api-access-fnpmj\") pod \"coredns-674b8bbfcf-4mjz4\" (UID: \"476a73de-395b-42d1-aef0-13cde14fd9c8\") " pod="kube-system/coredns-674b8bbfcf-4mjz4" Sep 12 23:58:44.502742 kubelet[2604]: I0912 23:58:44.502269 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzk2h\" (UniqueName: \"kubernetes.io/projected/ea8dd6bf-8417-4740-90cb-3ee84da12ecc-kube-api-access-rzk2h\") pod \"goldmane-54d579b49d-vfdq2\" (UID: \"ea8dd6bf-8417-4740-90cb-3ee84da12ecc\") " pod="calico-system/goldmane-54d579b49d-vfdq2" Sep 12 23:58:44.502742 kubelet[2604]: I0912 23:58:44.502305 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/2850e962-1009-4f88-9096-205a9a322adb-calico-apiserver-certs\") pod \"calico-apiserver-bb8b78f84-b4ll8\" (UID: \"2850e962-1009-4f88-9096-205a9a322adb\") " pod="calico-apiserver/calico-apiserver-bb8b78f84-b4ll8" Sep 12 23:58:44.502742 kubelet[2604]: I0912 23:58:44.502331 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rk9d\" (UniqueName: \"kubernetes.io/projected/ecc94c4e-e941-405c-8fac-8111372704be-kube-api-access-4rk9d\") pod \"whisker-674b8c84d8-crn2g\" (UID: \"ecc94c4e-e941-405c-8fac-8111372704be\") " pod="calico-system/whisker-674b8c84d8-crn2g" Sep 12 23:58:44.502742 kubelet[2604]: I0912 23:58:44.502352 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecc94c4e-e941-405c-8fac-8111372704be-whisker-ca-bundle\") pod \"whisker-674b8c84d8-crn2g\" (UID: \"ecc94c4e-e941-405c-8fac-8111372704be\") " pod="calico-system/whisker-674b8c84d8-crn2g" Sep 12 23:58:44.502742 kubelet[2604]: I0912 23:58:44.502373 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjpl2\" (UniqueName: \"kubernetes.io/projected/4f365313-855d-43b4-a047-74e3275b3496-kube-api-access-wjpl2\") pod \"calico-apiserver-bb8b78f84-8mxsb\" (UID: \"4f365313-855d-43b4-a047-74e3275b3496\") " pod="calico-apiserver/calico-apiserver-bb8b78f84-8mxsb" Sep 12 23:58:44.502944 kubelet[2604]: I0912 23:58:44.502390 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea8dd6bf-8417-4740-90cb-3ee84da12ecc-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-vfdq2\" (UID: \"ea8dd6bf-8417-4740-90cb-3ee84da12ecc\") " pod="calico-system/goldmane-54d579b49d-vfdq2" Sep 12 23:58:44.502944 kubelet[2604]: I0912 23:58:44.502407 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ea8dd6bf-8417-4740-90cb-3ee84da12ecc-goldmane-key-pair\") pod \"goldmane-54d579b49d-vfdq2\" (UID: \"ea8dd6bf-8417-4740-90cb-3ee84da12ecc\") " pod="calico-system/goldmane-54d579b49d-vfdq2" Sep 12 23:58:44.502944 kubelet[2604]: I0912 23:58:44.502431 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4f365313-855d-43b4-a047-74e3275b3496-calico-apiserver-certs\") pod \"calico-apiserver-bb8b78f84-8mxsb\" (UID: \"4f365313-855d-43b4-a047-74e3275b3496\") " pod="calico-apiserver/calico-apiserver-bb8b78f84-8mxsb" Sep 12 23:58:44.502944 kubelet[2604]: I0912 23:58:44.502447 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea8dd6bf-8417-4740-90cb-3ee84da12ecc-config\") pod \"goldmane-54d579b49d-vfdq2\" (UID: \"ea8dd6bf-8417-4740-90cb-3ee84da12ecc\") " pod="calico-system/goldmane-54d579b49d-vfdq2" Sep 12 23:58:44.502944 kubelet[2604]: I0912 23:58:44.502467 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfd82b01-39de-4359-8408-91804a116783-config-volume\") pod \"coredns-674b8bbfcf-lp4ss\" (UID: \"dfd82b01-39de-4359-8408-91804a116783\") " pod="kube-system/coredns-674b8bbfcf-lp4ss" Sep 12 
23:58:44.503113 kubelet[2604]: I0912 23:58:44.502494 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7464ec4d-b10e-40dc-b6fa-a3b3579b9606-tigera-ca-bundle\") pod \"calico-kube-controllers-68cf785d8c-6sttq\" (UID: \"7464ec4d-b10e-40dc-b6fa-a3b3579b9606\") " pod="calico-system/calico-kube-controllers-68cf785d8c-6sttq" Sep 12 23:58:44.503113 kubelet[2604]: I0912 23:58:44.502512 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrzl5\" (UniqueName: \"kubernetes.io/projected/7464ec4d-b10e-40dc-b6fa-a3b3579b9606-kube-api-access-wrzl5\") pod \"calico-kube-controllers-68cf785d8c-6sttq\" (UID: \"7464ec4d-b10e-40dc-b6fa-a3b3579b9606\") " pod="calico-system/calico-kube-controllers-68cf785d8c-6sttq" Sep 12 23:58:44.503113 kubelet[2604]: I0912 23:58:44.502550 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxsxx\" (UniqueName: \"kubernetes.io/projected/2850e962-1009-4f88-9096-205a9a322adb-kube-api-access-bxsxx\") pod \"calico-apiserver-bb8b78f84-b4ll8\" (UID: \"2850e962-1009-4f88-9096-205a9a322adb\") " pod="calico-apiserver/calico-apiserver-bb8b78f84-b4ll8" Sep 12 23:58:44.503113 kubelet[2604]: I0912 23:58:44.502569 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxx6r\" (UniqueName: \"kubernetes.io/projected/dfd82b01-39de-4359-8408-91804a116783-kube-api-access-hxx6r\") pod \"coredns-674b8bbfcf-lp4ss\" (UID: \"dfd82b01-39de-4359-8408-91804a116783\") " pod="kube-system/coredns-674b8bbfcf-lp4ss" Sep 12 23:58:44.507196 systemd[1]: Created slice kubepods-burstable-poddfd82b01_39de_4359_8408_91804a116783.slice - libcontainer container kubepods-burstable-poddfd82b01_39de_4359_8408_91804a116783.slice. Sep 12 23:58:44.517538 systemd[1]: Created slice kubepods-besteffort-podea8dd6bf_8417_4740_90cb_3ee84da12ecc.slice - libcontainer container kubepods-besteffort-podea8dd6bf_8417_4740_90cb_3ee84da12ecc.slice. Sep 12 23:58:44.527304 systemd[1]: Created slice kubepods-besteffort-pod2850e962_1009_4f88_9096_205a9a322adb.slice - libcontainer container kubepods-besteffort-pod2850e962_1009_4f88_9096_205a9a322adb.slice. 
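
The "Created slice" lines show the systemd cgroup driver's naming scheme: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID mapped to underscores. A small sketch of that mapping, reconstructed from the names above rather than from kubelet's implementation:

package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "7464ec4d-b10e-40dc-b6fa-a3b3579b9606"))
	// kubepods-besteffort-pod7464ec4d_b10e_40dc_b6fa_a3b3579b9606.slice
	fmt.Println(podSlice("burstable", "476a73de-395b-42d1-aef0-13cde14fd9c8"))
	// kubepods-burstable-pod476a73de_395b_42d1_aef0_13cde14fd9c8.slice
}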
Sep 12 23:58:44.774095 containerd[1466]: time="2025-09-12T23:58:44.773141893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68cf785d8c-6sttq,Uid:7464ec4d-b10e-40dc-b6fa-a3b3579b9606,Namespace:calico-system,Attempt:0,}" Sep 12 23:58:44.781295 containerd[1466]: time="2025-09-12T23:58:44.781248514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bb8b78f84-8mxsb,Uid:4f365313-855d-43b4-a047-74e3275b3496,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:58:44.800671 containerd[1466]: time="2025-09-12T23:58:44.800623095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4mjz4,Uid:476a73de-395b-42d1-aef0-13cde14fd9c8,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:44.808877 containerd[1466]: time="2025-09-12T23:58:44.808803796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674b8c84d8-crn2g,Uid:ecc94c4e-e941-405c-8fac-8111372704be,Namespace:calico-system,Attempt:0,}" Sep 12 23:58:44.813511 containerd[1466]: time="2025-09-12T23:58:44.813429402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lp4ss,Uid:dfd82b01-39de-4359-8408-91804a116783,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:44.827726 containerd[1466]: time="2025-09-12T23:58:44.826263110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vfdq2,Uid:ea8dd6bf-8417-4740-90cb-3ee84da12ecc,Namespace:calico-system,Attempt:0,}" Sep 12 23:58:44.831219 containerd[1466]: time="2025-09-12T23:58:44.831162554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bb8b78f84-b4ll8,Uid:2850e962-1009-4f88-9096-205a9a322adb,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:58:45.030345 containerd[1466]: time="2025-09-12T23:58:45.030187564Z" level=error msg="Failed to destroy network for sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.030750 containerd[1466]: time="2025-09-12T23:58:45.030546842Z" level=error msg="encountered an error cleaning up failed sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.030750 containerd[1466]: time="2025-09-12T23:58:45.030650441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bb8b78f84-8mxsb,Uid:4f365313-855d-43b4-a047-74e3275b3496,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.031407 kubelet[2604]: E0912 23:58:45.031290 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 12 23:58:45.031627 kubelet[2604]: E0912 23:58:45.031441 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bb8b78f84-8mxsb" Sep 12 23:58:45.031627 kubelet[2604]: E0912 23:58:45.031478 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bb8b78f84-8mxsb" Sep 12 23:58:45.031930 kubelet[2604]: E0912 23:58:45.031668 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bb8b78f84-8mxsb_calico-apiserver(4f365313-855d-43b4-a047-74e3275b3496)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bb8b78f84-8mxsb_calico-apiserver(4f365313-855d-43b4-a047-74e3275b3496)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bb8b78f84-8mxsb" podUID="4f365313-855d-43b4-a047-74e3275b3496" Sep 12 23:58:45.059402 containerd[1466]: time="2025-09-12T23:58:45.059092680Z" level=error msg="Failed to destroy network for sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.059520 containerd[1466]: time="2025-09-12T23:58:45.059497557Z" level=error msg="encountered an error cleaning up failed sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.059708 containerd[1466]: time="2025-09-12T23:58:45.059670156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lp4ss,Uid:dfd82b01-39de-4359-8408-91804a116783,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.060566 kubelet[2604]: E0912 23:58:45.060515 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
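
Every sandbox failure in this section, across all seven pods, reduces to the single root cause spelled out in the error text: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file written by calico-node when it starts, and calico-node is itself still being pulled (see the PullImage "ghcr.io/flatcar/calico/node:v3.30.3" line below). Until it runs, every CNI add and delete returns this stat error and kubelet keeps retrying. A sketch of the failing check, assuming only what the error message states:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Path from the error messages above; written by calico-node on startup.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// The failure mode logged repeatedly in this section.
		fmt.Fprintf(os.Stderr,
			"%v: check that the calico/node container is running and has mounted /var/lib/calico/\n",
			err)
		os.Exit(1)
	}
	fmt.Printf("CNI would use node name %q\n", strings.TrimSpace(string(data)))
}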
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.060688 kubelet[2604]: E0912 23:58:45.060591 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lp4ss" Sep 12 23:58:45.060688 kubelet[2604]: E0912 23:58:45.060654 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lp4ss" Sep 12 23:58:45.061243 kubelet[2604]: E0912 23:58:45.060795 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lp4ss_kube-system(dfd82b01-39de-4359-8408-91804a116783)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lp4ss_kube-system(dfd82b01-39de-4359-8408-91804a116783)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lp4ss" podUID="dfd82b01-39de-4359-8408-91804a116783" Sep 12 23:58:45.072795 containerd[1466]: time="2025-09-12T23:58:45.072739503Z" level=error msg="Failed to destroy network for sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.073361 containerd[1466]: time="2025-09-12T23:58:45.073326659Z" level=error msg="encountered an error cleaning up failed sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.073506 containerd[1466]: time="2025-09-12T23:58:45.073482218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68cf785d8c-6sttq,Uid:7464ec4d-b10e-40dc-b6fa-a3b3579b9606,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.074030 kubelet[2604]: E0912 23:58:45.073974 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.074221 kubelet[2604]: E0912 23:58:45.074145 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68cf785d8c-6sttq" Sep 12 23:58:45.074264 kubelet[2604]: E0912 23:58:45.074227 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68cf785d8c-6sttq" Sep 12 23:58:45.075222 kubelet[2604]: E0912 23:58:45.075158 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68cf785d8c-6sttq_calico-system(7464ec4d-b10e-40dc-b6fa-a3b3579b9606)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68cf785d8c-6sttq_calico-system(7464ec4d-b10e-40dc-b6fa-a3b3579b9606)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68cf785d8c-6sttq" podUID="7464ec4d-b10e-40dc-b6fa-a3b3579b9606" Sep 12 23:58:45.084349 containerd[1466]: time="2025-09-12T23:58:45.084268062Z" level=error msg="Failed to destroy network for sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.085654 containerd[1466]: time="2025-09-12T23:58:45.085530133Z" level=error msg="encountered an error cleaning up failed sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.085882 containerd[1466]: time="2025-09-12T23:58:45.085820491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674b8c84d8-crn2g,Uid:ecc94c4e-e941-405c-8fac-8111372704be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.086566 kubelet[2604]: E0912 23:58:45.086462 2604 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.089033 kubelet[2604]: E0912 23:58:45.087228 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-674b8c84d8-crn2g" Sep 12 23:58:45.089033 kubelet[2604]: E0912 23:58:45.087282 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-674b8c84d8-crn2g" Sep 12 23:58:45.089033 kubelet[2604]: E0912 23:58:45.087671 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-674b8c84d8-crn2g_calico-system(ecc94c4e-e941-405c-8fac-8111372704be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-674b8c84d8-crn2g_calico-system(ecc94c4e-e941-405c-8fac-8111372704be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-674b8c84d8-crn2g" podUID="ecc94c4e-e941-405c-8fac-8111372704be" Sep 12 23:58:45.097443 containerd[1466]: time="2025-09-12T23:58:45.097377569Z" level=error msg="Failed to destroy network for sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.097983 containerd[1466]: time="2025-09-12T23:58:45.097948565Z" level=error msg="encountered an error cleaning up failed sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.098128 containerd[1466]: time="2025-09-12T23:58:45.098103004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bb8b78f84-b4ll8,Uid:2850e962-1009-4f88-9096-205a9a322adb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.098515 kubelet[2604]: E0912 
23:58:45.098462 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.098589 kubelet[2604]: E0912 23:58:45.098540 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bb8b78f84-b4ll8" Sep 12 23:58:45.098589 kubelet[2604]: E0912 23:58:45.098567 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bb8b78f84-b4ll8" Sep 12 23:58:45.098739 kubelet[2604]: E0912 23:58:45.098630 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bb8b78f84-b4ll8_calico-apiserver(2850e962-1009-4f88-9096-205a9a322adb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bb8b78f84-b4ll8_calico-apiserver(2850e962-1009-4f88-9096-205a9a322adb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bb8b78f84-b4ll8" podUID="2850e962-1009-4f88-9096-205a9a322adb" Sep 12 23:58:45.108629 containerd[1466]: time="2025-09-12T23:58:45.108250213Z" level=error msg="Failed to destroy network for sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.109163 containerd[1466]: time="2025-09-12T23:58:45.109126846Z" level=error msg="Failed to destroy network for sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.109417 containerd[1466]: time="2025-09-12T23:58:45.109164126Z" level=error msg="encountered an error cleaning up failed sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.109485 containerd[1466]: time="2025-09-12T23:58:45.109453604Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4mjz4,Uid:476a73de-395b-42d1-aef0-13cde14fd9c8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.110224 kubelet[2604]: E0912 23:58:45.110187 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.110321 kubelet[2604]: E0912 23:58:45.110250 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4mjz4" Sep 12 23:58:45.110321 kubelet[2604]: E0912 23:58:45.110270 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4mjz4" Sep 12 23:58:45.110377 kubelet[2604]: E0912 23:58:45.110321 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4mjz4_kube-system(476a73de-395b-42d1-aef0-13cde14fd9c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4mjz4_kube-system(476a73de-395b-42d1-aef0-13cde14fd9c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4mjz4" podUID="476a73de-395b-42d1-aef0-13cde14fd9c8" Sep 12 23:58:45.111072 containerd[1466]: time="2025-09-12T23:58:45.110874594Z" level=error msg="encountered an error cleaning up failed sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.111072 containerd[1466]: time="2025-09-12T23:58:45.110945474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vfdq2,Uid:ea8dd6bf-8417-4740-90cb-3ee84da12ecc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.111920 kubelet[2604]: E0912 23:58:45.111877 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.112037 kubelet[2604]: E0912 23:58:45.111936 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-vfdq2" Sep 12 23:58:45.112037 kubelet[2604]: E0912 23:58:45.111955 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-vfdq2" Sep 12 23:58:45.112037 kubelet[2604]: E0912 23:58:45.112022 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-vfdq2_calico-system(ea8dd6bf-8417-4740-90cb-3ee84da12ecc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-vfdq2_calico-system(ea8dd6bf-8417-4740-90cb-3ee84da12ecc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-vfdq2" podUID="ea8dd6bf-8417-4740-90cb-3ee84da12ecc" Sep 12 23:58:45.169928 systemd[1]: Created slice kubepods-besteffort-pod0c4685c7_dc20_48ea_a187_79c68ef78d4c.slice - libcontainer container kubepods-besteffort-pod0c4685c7_dc20_48ea_a187_79c68ef78d4c.slice. 
Sep 12 23:58:45.173560 containerd[1466]: time="2025-09-12T23:58:45.173121394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd2gt,Uid:0c4685c7-dc20-48ea-a187-79c68ef78d4c,Namespace:calico-system,Attempt:0,}" Sep 12 23:58:45.240787 containerd[1466]: time="2025-09-12T23:58:45.240733957Z" level=error msg="Failed to destroy network for sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.241411 containerd[1466]: time="2025-09-12T23:58:45.241377432Z" level=error msg="encountered an error cleaning up failed sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.241570 containerd[1466]: time="2025-09-12T23:58:45.241545191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd2gt,Uid:0c4685c7-dc20-48ea-a187-79c68ef78d4c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.241993 kubelet[2604]: E0912 23:58:45.241952 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.242125 kubelet[2604]: E0912 23:58:45.242033 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fd2gt" Sep 12 23:58:45.242125 kubelet[2604]: E0912 23:58:45.242057 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fd2gt" Sep 12 23:58:45.242220 kubelet[2604]: E0912 23:58:45.242112 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fd2gt_calico-system(0c4685c7-dc20-48ea-a187-79c68ef78d4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fd2gt_calico-system(0c4685c7-dc20-48ea-a187-79c68ef78d4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fd2gt" podUID="0c4685c7-dc20-48ea-a187-79c68ef78d4c" Sep 12 23:58:45.329383 containerd[1466]: time="2025-09-12T23:58:45.328145379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 23:58:45.331668 kubelet[2604]: I0912 23:58:45.331605 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:58:45.334307 containerd[1466]: time="2025-09-12T23:58:45.333827739Z" level=info msg="StopPodSandbox for \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\"" Sep 12 23:58:45.334307 containerd[1466]: time="2025-09-12T23:58:45.334092137Z" level=info msg="Ensure that sandbox 664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a in task-service has been cleanup successfully" Sep 12 23:58:45.341340 kubelet[2604]: I0912 23:58:45.341311 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:58:45.343489 containerd[1466]: time="2025-09-12T23:58:45.343429671Z" level=info msg="StopPodSandbox for \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\"" Sep 12 23:58:45.344026 containerd[1466]: time="2025-09-12T23:58:45.343860908Z" level=info msg="Ensure that sandbox f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6 in task-service has been cleanup successfully" Sep 12 23:58:45.349172 kubelet[2604]: I0912 23:58:45.346829 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:58:45.349321 containerd[1466]: time="2025-09-12T23:58:45.348475835Z" level=info msg="StopPodSandbox for \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\"" Sep 12 23:58:45.350826 containerd[1466]: time="2025-09-12T23:58:45.350776099Z" level=info msg="Ensure that sandbox a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a in task-service has been cleanup successfully" Sep 12 23:58:45.354638 kubelet[2604]: I0912 23:58:45.354107 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:58:45.362057 containerd[1466]: time="2025-09-12T23:58:45.361974260Z" level=info msg="StopPodSandbox for \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\"" Sep 12 23:58:45.364203 containerd[1466]: time="2025-09-12T23:58:45.364145645Z" level=info msg="Ensure that sandbox 3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1 in task-service has been cleanup successfully" Sep 12 23:58:45.378546 kubelet[2604]: I0912 23:58:45.378512 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:58:45.382157 containerd[1466]: time="2025-09-12T23:58:45.382035918Z" level=info msg="StopPodSandbox for \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\"" Sep 12 23:58:45.382271 containerd[1466]: time="2025-09-12T23:58:45.382239877Z" level=info msg="Ensure that sandbox 
0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6 in task-service has been cleanup successfully" Sep 12 23:58:45.385134 kubelet[2604]: I0912 23:58:45.384614 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:58:45.388510 containerd[1466]: time="2025-09-12T23:58:45.388465193Z" level=info msg="StopPodSandbox for \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\"" Sep 12 23:58:45.390072 containerd[1466]: time="2025-09-12T23:58:45.389384786Z" level=info msg="Ensure that sandbox b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785 in task-service has been cleanup successfully" Sep 12 23:58:45.402836 kubelet[2604]: I0912 23:58:45.401945 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:58:45.408561 containerd[1466]: time="2025-09-12T23:58:45.408516531Z" level=info msg="StopPodSandbox for \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\"" Sep 12 23:58:45.415921 containerd[1466]: time="2025-09-12T23:58:45.412640662Z" level=info msg="Ensure that sandbox adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e in task-service has been cleanup successfully" Sep 12 23:58:45.428164 kubelet[2604]: I0912 23:58:45.428129 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:58:45.429529 containerd[1466]: time="2025-09-12T23:58:45.429045306Z" level=info msg="StopPodSandbox for \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\"" Sep 12 23:58:45.429529 containerd[1466]: time="2025-09-12T23:58:45.429255265Z" level=info msg="Ensure that sandbox 4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f in task-service has been cleanup successfully" Sep 12 23:58:45.463253 containerd[1466]: time="2025-09-12T23:58:45.463195225Z" level=error msg="StopPodSandbox for \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\" failed" error="failed to destroy network for sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.463730 kubelet[2604]: E0912 23:58:45.463694 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:58:45.463913 kubelet[2604]: E0912 23:58:45.463802 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a"} Sep 12 23:58:45.463913 kubelet[2604]: E0912 23:58:45.463951 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f365313-855d-43b4-a047-74e3275b3496\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:58:45.463913 kubelet[2604]: E0912 23:58:45.463981 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f365313-855d-43b4-a047-74e3275b3496\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bb8b78f84-8mxsb" podUID="4f365313-855d-43b4-a047-74e3275b3496" Sep 12 23:58:45.493649 containerd[1466]: time="2025-09-12T23:58:45.493554890Z" level=error msg="StopPodSandbox for \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\" failed" error="failed to destroy network for sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.494169 kubelet[2604]: E0912 23:58:45.493807 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:58:45.494169 kubelet[2604]: E0912 23:58:45.493863 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6"} Sep 12 23:58:45.494169 kubelet[2604]: E0912 23:58:45.493895 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7464ec4d-b10e-40dc-b6fa-a3b3579b9606\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:58:45.494169 kubelet[2604]: E0912 23:58:45.493917 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7464ec4d-b10e-40dc-b6fa-a3b3579b9606\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68cf785d8c-6sttq" podUID="7464ec4d-b10e-40dc-b6fa-a3b3579b9606" Sep 12 23:58:45.509230 containerd[1466]: time="2025-09-12T23:58:45.508994981Z" level=error msg="StopPodSandbox for 
\"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\" failed" error="failed to destroy network for sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.509909 kubelet[2604]: E0912 23:58:45.509492 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:58:45.509909 kubelet[2604]: E0912 23:58:45.509725 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785"} Sep 12 23:58:45.510439 kubelet[2604]: E0912 23:58:45.510238 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2850e962-1009-4f88-9096-205a9a322adb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:58:45.510439 kubelet[2604]: E0912 23:58:45.510298 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2850e962-1009-4f88-9096-205a9a322adb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bb8b78f84-b4ll8" podUID="2850e962-1009-4f88-9096-205a9a322adb" Sep 12 23:58:45.513136 containerd[1466]: time="2025-09-12T23:58:45.512734315Z" level=error msg="StopPodSandbox for \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\" failed" error="failed to destroy network for sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.513304 kubelet[2604]: E0912 23:58:45.512960 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:58:45.513304 kubelet[2604]: E0912 23:58:45.513034 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1"} Sep 12 23:58:45.513304 kubelet[2604]: E0912 23:58:45.513071 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ecc94c4e-e941-405c-8fac-8111372704be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:58:45.513304 kubelet[2604]: E0912 23:58:45.513093 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ecc94c4e-e941-405c-8fac-8111372704be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-674b8c84d8-crn2g" podUID="ecc94c4e-e941-405c-8fac-8111372704be" Sep 12 23:58:45.514949 containerd[1466]: time="2025-09-12T23:58:45.514309584Z" level=error msg="StopPodSandbox for \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\" failed" error="failed to destroy network for sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.515254 kubelet[2604]: E0912 23:58:45.514658 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:58:45.515254 kubelet[2604]: E0912 23:58:45.514744 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e"} Sep 12 23:58:45.515254 kubelet[2604]: E0912 23:58:45.514780 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea8dd6bf-8417-4740-90cb-3ee84da12ecc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:58:45.515254 kubelet[2604]: E0912 23:58:45.514901 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea8dd6bf-8417-4740-90cb-3ee84da12ecc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-vfdq2" podUID="ea8dd6bf-8417-4740-90cb-3ee84da12ecc" Sep 12 23:58:45.515411 containerd[1466]: time="2025-09-12T23:58:45.514883820Z" level=error msg="StopPodSandbox for \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\" failed" error="failed to destroy network for sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.515865 kubelet[2604]: E0912 23:58:45.515560 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:58:45.515865 kubelet[2604]: E0912 23:58:45.515775 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a"} Sep 12 23:58:45.515865 kubelet[2604]: E0912 23:58:45.515808 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c4685c7-dc20-48ea-a187-79c68ef78d4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:58:45.515865 kubelet[2604]: E0912 23:58:45.515828 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c4685c7-dc20-48ea-a187-79c68ef78d4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fd2gt" podUID="0c4685c7-dc20-48ea-a187-79c68ef78d4c" Sep 12 23:58:45.523734 containerd[1466]: time="2025-09-12T23:58:45.523678838Z" level=error msg="StopPodSandbox for \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\" failed" error="failed to destroy network for sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.524619 kubelet[2604]: E0912 23:58:45.524110 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:58:45.524619 kubelet[2604]: E0912 23:58:45.524165 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f"} Sep 12 23:58:45.524619 kubelet[2604]: E0912 23:58:45.524201 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dfd82b01-39de-4359-8408-91804a116783\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:58:45.524619 kubelet[2604]: E0912 23:58:45.524227 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dfd82b01-39de-4359-8408-91804a116783\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lp4ss" podUID="dfd82b01-39de-4359-8408-91804a116783" Sep 12 23:58:45.525878 containerd[1466]: time="2025-09-12T23:58:45.525816183Z" level=error msg="StopPodSandbox for \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\" failed" error="failed to destroy network for sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:58:45.526212 kubelet[2604]: E0912 23:58:45.526164 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:58:45.526280 kubelet[2604]: E0912 23:58:45.526229 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6"} Sep 12 23:58:45.526280 kubelet[2604]: E0912 23:58:45.526272 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"476a73de-395b-42d1-aef0-13cde14fd9c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:58:45.526352 kubelet[2604]: E0912 23:58:45.526294 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"476a73de-395b-42d1-aef0-13cde14fd9c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4mjz4" podUID="476a73de-395b-42d1-aef0-13cde14fd9c8" Sep 12 23:58:45.683017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a-shm.mount: Deactivated successfully. Sep 12 23:58:45.683144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6-shm.mount: Deactivated successfully. Sep 12 23:58:46.949720 kubelet[2604]: I0912 23:58:46.949383 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:58:49.320363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456515983.mount: Deactivated successfully. Sep 12 23:58:49.347741 containerd[1466]: time="2025-09-12T23:58:49.347680606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:49.348670 containerd[1466]: time="2025-09-12T23:58:49.348631320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 23:58:49.350124 containerd[1466]: time="2025-09-12T23:58:49.350058750Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:49.352813 containerd[1466]: time="2025-09-12T23:58:49.352760373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:49.354118 containerd[1466]: time="2025-09-12T23:58:49.353716206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.025525347s" Sep 12 23:58:49.354118 containerd[1466]: time="2025-09-12T23:58:49.353764286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 23:58:49.380679 containerd[1466]: time="2025-09-12T23:58:49.380624909Z" level=info msg="CreateContainer within sandbox \"5b048d35c35b055b5a3ab68608307af1953212e957905f928445826de9d439c0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 23:58:49.400510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3897860001.mount: Deactivated successfully. 
Sep 12 23:58:49.404123 containerd[1466]: time="2025-09-12T23:58:49.403927796Z" level=info msg="CreateContainer within sandbox \"5b048d35c35b055b5a3ab68608307af1953212e957905f928445826de9d439c0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8533f0084c3a10c182eda34a2e5bc61b3c5c8613b0939556220b1d67edf31454\"" Sep 12 23:58:49.406485 containerd[1466]: time="2025-09-12T23:58:49.406438340Z" level=info msg="StartContainer for \"8533f0084c3a10c182eda34a2e5bc61b3c5c8613b0939556220b1d67edf31454\"" Sep 12 23:58:49.436860 systemd[1]: Started cri-containerd-8533f0084c3a10c182eda34a2e5bc61b3c5c8613b0939556220b1d67edf31454.scope - libcontainer container 8533f0084c3a10c182eda34a2e5bc61b3c5c8613b0939556220b1d67edf31454. Sep 12 23:58:49.533937 containerd[1466]: time="2025-09-12T23:58:49.533510024Z" level=info msg="StartContainer for \"8533f0084c3a10c182eda34a2e5bc61b3c5c8613b0939556220b1d67edf31454\" returns successfully" Sep 12 23:58:49.651039 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 23:58:49.651216 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 12 23:58:49.808362 containerd[1466]: time="2025-09-12T23:58:49.808304658Z" level=info msg="StopPodSandbox for \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\"" Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:49.943 [INFO][3806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:49.945 [INFO][3806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" iface="eth0" netns="/var/run/netns/cni-3b67b3d3-6e62-d4ff-6653-4d024593d2ad" Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:49.946 [INFO][3806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" iface="eth0" netns="/var/run/netns/cni-3b67b3d3-6e62-d4ff-6653-4d024593d2ad" Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:49.946 [INFO][3806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" iface="eth0" netns="/var/run/netns/cni-3b67b3d3-6e62-d4ff-6653-4d024593d2ad" Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:49.946 [INFO][3806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:49.947 [INFO][3806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:50.010 [INFO][3815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" HandleID="k8s-pod-network.3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:50.010 [INFO][3815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:50.010 [INFO][3815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:50.026 [WARNING][3815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" HandleID="k8s-pod-network.3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:50.026 [INFO][3815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" HandleID="k8s-pod-network.3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:50.029 [INFO][3815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:58:50.036320 containerd[1466]: 2025-09-12 23:58:50.033 [INFO][3806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:58:50.036320 containerd[1466]: time="2025-09-12T23:58:50.036024844Z" level=info msg="TearDown network for sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\" successfully" Sep 12 23:58:50.036320 containerd[1466]: time="2025-09-12T23:58:50.036063564Z" level=info msg="StopPodSandbox for \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\" returns successfully" Sep 12 23:58:50.152545 kubelet[2604]: I0912 23:58:50.152390 2604 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rk9d\" (UniqueName: \"kubernetes.io/projected/ecc94c4e-e941-405c-8fac-8111372704be-kube-api-access-4rk9d\") pod \"ecc94c4e-e941-405c-8fac-8111372704be\" (UID: \"ecc94c4e-e941-405c-8fac-8111372704be\") " Sep 12 23:58:50.152545 kubelet[2604]: I0912 23:58:50.152462 2604 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecc94c4e-e941-405c-8fac-8111372704be-whisker-ca-bundle\") pod \"ecc94c4e-e941-405c-8fac-8111372704be\" (UID: \"ecc94c4e-e941-405c-8fac-8111372704be\") " Sep 12 23:58:50.152545 kubelet[2604]: I0912 23:58:50.152517 2604 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecc94c4e-e941-405c-8fac-8111372704be-whisker-backend-key-pair\") pod \"ecc94c4e-e941-405c-8fac-8111372704be\" (UID: \"ecc94c4e-e941-405c-8fac-8111372704be\") " Sep 12 23:58:50.157672 kubelet[2604]: I0912 23:58:50.157164 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecc94c4e-e941-405c-8fac-8111372704be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ecc94c4e-e941-405c-8fac-8111372704be" (UID: "ecc94c4e-e941-405c-8fac-8111372704be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 23:58:50.159188 kubelet[2604]: I0912 23:58:50.159138 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc94c4e-e941-405c-8fac-8111372704be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ecc94c4e-e941-405c-8fac-8111372704be" (UID: "ecc94c4e-e941-405c-8fac-8111372704be"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 23:58:50.161997 kubelet[2604]: I0912 23:58:50.161881 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc94c4e-e941-405c-8fac-8111372704be-kube-api-access-4rk9d" (OuterVolumeSpecName: "kube-api-access-4rk9d") pod "ecc94c4e-e941-405c-8fac-8111372704be" (UID: "ecc94c4e-e941-405c-8fac-8111372704be"). InnerVolumeSpecName "kube-api-access-4rk9d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:58:50.253019 kubelet[2604]: I0912 23:58:50.252933 2604 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4rk9d\" (UniqueName: \"kubernetes.io/projected/ecc94c4e-e941-405c-8fac-8111372704be-kube-api-access-4rk9d\") on node \"ci-4081-3-5-n-f526684106\" DevicePath \"\"" Sep 12 23:58:50.253019 kubelet[2604]: I0912 23:58:50.253017 2604 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecc94c4e-e941-405c-8fac-8111372704be-whisker-ca-bundle\") on node \"ci-4081-3-5-n-f526684106\" DevicePath \"\"" Sep 12 23:58:50.253019 kubelet[2604]: I0912 23:58:50.253030 2604 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecc94c4e-e941-405c-8fac-8111372704be-whisker-backend-key-pair\") on node \"ci-4081-3-5-n-f526684106\" DevicePath \"\"" Sep 12 23:58:50.322642 systemd[1]: run-netns-cni\x2d3b67b3d3\x2d6e62\x2dd4ff\x2d6653\x2d4d024593d2ad.mount: Deactivated successfully. Sep 12 23:58:50.322754 systemd[1]: var-lib-kubelet-pods-ecc94c4e\x2de941\x2d405c\x2d8fac\x2d8111372704be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4rk9d.mount: Deactivated successfully. Sep 12 23:58:50.322815 systemd[1]: var-lib-kubelet-pods-ecc94c4e\x2de941\x2d405c\x2d8fac\x2d8111372704be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 23:58:50.459921 systemd[1]: Removed slice kubepods-besteffort-podecc94c4e_e941_405c_8fac_8111372704be.slice - libcontainer container kubepods-besteffort-podecc94c4e_e941_405c_8fac_8111372704be.slice. Sep 12 23:58:50.505939 kubelet[2604]: I0912 23:58:50.505835 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rswvq" podStartSLOduration=2.04523638 podStartE2EDuration="13.505808324s" podCreationTimestamp="2025-09-12 23:58:37 +0000 UTC" firstStartedPulling="2025-09-12 23:58:37.893966417 +0000 UTC m=+24.885652666" lastFinishedPulling="2025-09-12 23:58:49.354538361 +0000 UTC m=+36.346224610" observedRunningTime="2025-09-12 23:58:50.490817661 +0000 UTC m=+37.482503910" watchObservedRunningTime="2025-09-12 23:58:50.505808324 +0000 UTC m=+37.497494573" Sep 12 23:58:50.583919 systemd[1]: Created slice kubepods-besteffort-podfe76a03c_2a4f_4211_a0cb_27d583ba71d6.slice - libcontainer container kubepods-besteffort-podfe76a03c_2a4f_4211_a0cb_27d583ba71d6.slice. 
Sep 12 23:58:50.660409 kubelet[2604]: I0912 23:58:50.660339 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe76a03c-2a4f-4211-a0cb-27d583ba71d6-whisker-backend-key-pair\") pod \"whisker-5f89cd5fff-9mg8n\" (UID: \"fe76a03c-2a4f-4211-a0cb-27d583ba71d6\") " pod="calico-system/whisker-5f89cd5fff-9mg8n" Sep 12 23:58:50.660409 kubelet[2604]: I0912 23:58:50.660410 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe76a03c-2a4f-4211-a0cb-27d583ba71d6-whisker-ca-bundle\") pod \"whisker-5f89cd5fff-9mg8n\" (UID: \"fe76a03c-2a4f-4211-a0cb-27d583ba71d6\") " pod="calico-system/whisker-5f89cd5fff-9mg8n" Sep 12 23:58:50.660744 kubelet[2604]: I0912 23:58:50.660446 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6dmb\" (UniqueName: \"kubernetes.io/projected/fe76a03c-2a4f-4211-a0cb-27d583ba71d6-kube-api-access-w6dmb\") pod \"whisker-5f89cd5fff-9mg8n\" (UID: \"fe76a03c-2a4f-4211-a0cb-27d583ba71d6\") " pod="calico-system/whisker-5f89cd5fff-9mg8n" Sep 12 23:58:50.892087 containerd[1466]: time="2025-09-12T23:58:50.891869626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f89cd5fff-9mg8n,Uid:fe76a03c-2a4f-4211-a0cb-27d583ba71d6,Namespace:calico-system,Attempt:0,}" Sep 12 23:58:51.049832 systemd-networkd[1376]: cali506cf5897d1: Link UP Sep 12 23:58:51.050428 systemd-networkd[1376]: cali506cf5897d1: Gained carrier Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:50.934 [INFO][3837] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:50.952 [INFO][3837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0 whisker-5f89cd5fff- calico-system fe76a03c-2a4f-4211-a0cb-27d583ba71d6 918 0 2025-09-12 23:58:50 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f89cd5fff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-n-f526684106 whisker-5f89cd5fff-9mg8n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali506cf5897d1 [] [] }} ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Namespace="calico-system" Pod="whisker-5f89cd5fff-9mg8n" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:50.953 [INFO][3837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Namespace="calico-system" Pod="whisker-5f89cd5fff-9mg8n" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:50.982 [INFO][3850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" HandleID="k8s-pod-network.abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:50.982 [INFO][3850]
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" HandleID="k8s-pod-network.abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3680), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-f526684106", "pod":"whisker-5f89cd5fff-9mg8n", "timestamp":"2025-09-12 23:58:50.982623039 +0000 UTC"}, Hostname:"ci-4081-3-5-n-f526684106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:50.982 [INFO][3850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:50.982 [INFO][3850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:50.982 [INFO][3850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-f526684106' Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:50.998 [INFO][3850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:51.005 [INFO][3850] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-f526684106" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:51.011 [INFO][3850] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:51.015 [INFO][3850] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:51.020 [INFO][3850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:51.020 [INFO][3850] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:51.023 [INFO][3850] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:51.029 [INFO][3850] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:51.038 [INFO][3850] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.65/26] block=192.168.96.64/26 handle="k8s-pod-network.abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:51.038 [INFO][3850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.65/26] handle="k8s-pod-network.abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:51.069939 containerd[1466]: 
2025-09-12 23:58:51.038 [INFO][3850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:58:51.069939 containerd[1466]: 2025-09-12 23:58:51.038 [INFO][3850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.65/26] IPv6=[] ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" HandleID="k8s-pod-network.abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0" Sep 12 23:58:51.070569 containerd[1466]: 2025-09-12 23:58:51.040 [INFO][3837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Namespace="calico-system" Pod="whisker-5f89cd5fff-9mg8n" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0", GenerateName:"whisker-5f89cd5fff-", Namespace:"calico-system", SelfLink:"", UID:"fe76a03c-2a4f-4211-a0cb-27d583ba71d6", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f89cd5fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"", Pod:"whisker-5f89cd5fff-9mg8n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali506cf5897d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:51.070569 containerd[1466]: 2025-09-12 23:58:51.040 [INFO][3837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.65/32] ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Namespace="calico-system" Pod="whisker-5f89cd5fff-9mg8n" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0" Sep 12 23:58:51.070569 containerd[1466]: 2025-09-12 23:58:51.041 [INFO][3837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali506cf5897d1 ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Namespace="calico-system" Pod="whisker-5f89cd5fff-9mg8n" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0" Sep 12 23:58:51.070569 containerd[1466]: 2025-09-12 23:58:51.051 [INFO][3837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Namespace="calico-system" Pod="whisker-5f89cd5fff-9mg8n" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0" Sep 12 23:58:51.070569 containerd[1466]: 2025-09-12 23:58:51.051 [INFO][3837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Namespace="calico-system" Pod="whisker-5f89cd5fff-9mg8n" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0", GenerateName:"whisker-5f89cd5fff-", Namespace:"calico-system", SelfLink:"", UID:"fe76a03c-2a4f-4211-a0cb-27d583ba71d6", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f89cd5fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f", Pod:"whisker-5f89cd5fff-9mg8n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali506cf5897d1", MAC:"7e:fe:c7:f2:1e:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:51.070569 containerd[1466]: 2025-09-12 23:58:51.066 [INFO][3837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f" Namespace="calico-system" Pod="whisker-5f89cd5fff-9mg8n" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-whisker--5f89cd5fff--9mg8n-eth0" Sep 12 23:58:51.091831 containerd[1466]: time="2025-09-12T23:58:51.091350624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:51.091831 containerd[1466]: time="2025-09-12T23:58:51.091432743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:51.091831 containerd[1466]: time="2025-09-12T23:58:51.091449023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:51.091831 containerd[1466]: time="2025-09-12T23:58:51.091539382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:51.112898 systemd[1]: Started cri-containerd-abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f.scope - libcontainer container abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f.
Sep 12 23:58:51.154190 containerd[1466]: time="2025-09-12T23:58:51.152891071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f89cd5fff-9mg8n,Uid:fe76a03c-2a4f-4211-a0cb-27d583ba71d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f\"" Sep 12 23:58:51.159631 containerd[1466]: time="2025-09-12T23:58:51.159559829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 23:58:51.163286 kubelet[2604]: I0912 23:58:51.163150 2604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc94c4e-e941-405c-8fac-8111372704be" path="/var/lib/kubelet/pods/ecc94c4e-e941-405c-8fac-8111372704be/volumes" Sep 12 23:58:51.515340 systemd[1]: run-containerd-runc-k8s.io-8533f0084c3a10c182eda34a2e5bc61b3c5c8613b0939556220b1d67edf31454-runc.rMKgOk.mount: Deactivated successfully. Sep 12 23:58:51.726758 kernel: bpftool[4047]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 23:58:51.927636 systemd-networkd[1376]: vxlan.calico: Link UP Sep 12 23:58:51.927650 systemd-networkd[1376]: vxlan.calico: Gained carrier Sep 12 23:58:52.493737 systemd[1]: run-containerd-runc-k8s.io-8533f0084c3a10c182eda34a2e5bc61b3c5c8613b0939556220b1d67edf31454-runc.H2hdgh.mount: Deactivated successfully. Sep 12 23:58:52.856555 systemd-networkd[1376]: cali506cf5897d1: Gained IPv6LL Sep 12 23:58:53.164300 containerd[1466]: time="2025-09-12T23:58:53.164160373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:53.165833 containerd[1466]: time="2025-09-12T23:58:53.165369405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 23:58:53.167380 containerd[1466]: time="2025-09-12T23:58:53.167313593Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:53.171663 containerd[1466]: time="2025-09-12T23:58:53.171271609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:53.171995 containerd[1466]: time="2025-09-12T23:58:53.171834365Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 2.012212936s" Sep 12 23:58:53.171995 containerd[1466]: time="2025-09-12T23:58:53.171871725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 23:58:53.179104 containerd[1466]: time="2025-09-12T23:58:53.179035360Z" level=info msg="CreateContainer within sandbox \"abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 23:58:53.195538 containerd[1466]: time="2025-09-12T23:58:53.195169700Z" level=info msg="CreateContainer within sandbox \"abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f\" for &ContainerMetadata{Name:whisker,Attempt:0,} 
returns container id \"62fc447eb0f2af1fb5cd3a003b0bb7226f8bd96e697254f97b50137be944cc29\"" Sep 12 23:58:53.196309 containerd[1466]: time="2025-09-12T23:58:53.196272134Z" level=info msg="StartContainer for \"62fc447eb0f2af1fb5cd3a003b0bb7226f8bd96e697254f97b50137be944cc29\"" Sep 12 23:58:53.232852 systemd[1]: Started cri-containerd-62fc447eb0f2af1fb5cd3a003b0bb7226f8bd96e697254f97b50137be944cc29.scope - libcontainer container 62fc447eb0f2af1fb5cd3a003b0bb7226f8bd96e697254f97b50137be944cc29. Sep 12 23:58:53.283378 containerd[1466]: time="2025-09-12T23:58:53.283322874Z" level=info msg="StartContainer for \"62fc447eb0f2af1fb5cd3a003b0bb7226f8bd96e697254f97b50137be944cc29\" returns successfully" Sep 12 23:58:53.288066 containerd[1466]: time="2025-09-12T23:58:53.287723807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 23:58:53.686942 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Sep 12 23:58:55.018365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033086855.mount: Deactivated successfully. Sep 12 23:58:55.035850 containerd[1466]: time="2025-09-12T23:58:55.035795381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:55.037054 containerd[1466]: time="2025-09-12T23:58:55.036854974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 23:58:55.038719 containerd[1466]: time="2025-09-12T23:58:55.038349685Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:55.041698 containerd[1466]: time="2025-09-12T23:58:55.041654225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:55.042694 containerd[1466]: time="2025-09-12T23:58:55.042655779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.754882732s" Sep 12 23:58:55.042798 containerd[1466]: time="2025-09-12T23:58:55.042695899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 23:58:55.049293 containerd[1466]: time="2025-09-12T23:58:55.049143860Z" level=info msg="CreateContainer within sandbox \"abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 23:58:55.066024 containerd[1466]: time="2025-09-12T23:58:55.065865239Z" level=info msg="CreateContainer within sandbox \"abffa14d309141620a1ebad38fd9f8027f4862684e4b1d2b810534a93803ee5f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5d02c0dd7775997ea879fdb8e6c3c72cefd793a8c0672d90ce426f6167e7f6e9\"" Sep 12 23:58:55.068452 containerd[1466]: time="2025-09-12T23:58:55.066990072Z" level=info msg="StartContainer for 
\"5d02c0dd7775997ea879fdb8e6c3c72cefd793a8c0672d90ce426f6167e7f6e9\"" Sep 12 23:58:55.103988 systemd[1]: Started cri-containerd-5d02c0dd7775997ea879fdb8e6c3c72cefd793a8c0672d90ce426f6167e7f6e9.scope - libcontainer container 5d02c0dd7775997ea879fdb8e6c3c72cefd793a8c0672d90ce426f6167e7f6e9. Sep 12 23:58:55.140163 containerd[1466]: time="2025-09-12T23:58:55.140089150Z" level=info msg="StartContainer for \"5d02c0dd7775997ea879fdb8e6c3c72cefd793a8c0672d90ce426f6167e7f6e9\" returns successfully" Sep 12 23:58:55.501846 kubelet[2604]: I0912 23:58:55.501675 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5f89cd5fff-9mg8n" podStartSLOduration=1.613398889 podStartE2EDuration="5.501557087s" podCreationTimestamp="2025-09-12 23:58:50 +0000 UTC" firstStartedPulling="2025-09-12 23:58:51.155977052 +0000 UTC m=+38.147663301" lastFinishedPulling="2025-09-12 23:58:55.04413525 +0000 UTC m=+42.035821499" observedRunningTime="2025-09-12 23:58:55.499989296 +0000 UTC m=+42.491675585" watchObservedRunningTime="2025-09-12 23:58:55.501557087 +0000 UTC m=+42.493243336" Sep 12 23:58:57.165595 containerd[1466]: time="2025-09-12T23:58:57.165258969Z" level=info msg="StopPodSandbox for \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\"" Sep 12 23:58:57.170394 containerd[1466]: time="2025-09-12T23:58:57.166195404Z" level=info msg="StopPodSandbox for \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\"" Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.270 [INFO][4248] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.271 [INFO][4248] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" iface="eth0" netns="/var/run/netns/cni-8f21db77-0095-585f-5ca4-c5390195bd17" Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.272 [INFO][4248] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" iface="eth0" netns="/var/run/netns/cni-8f21db77-0095-585f-5ca4-c5390195bd17" Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.273 [INFO][4248] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" iface="eth0" netns="/var/run/netns/cni-8f21db77-0095-585f-5ca4-c5390195bd17" Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.273 [INFO][4248] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.273 [INFO][4248] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.319 [INFO][4270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" HandleID="k8s-pod-network.f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.319 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.319 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.335 [WARNING][4270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" HandleID="k8s-pod-network.f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.335 [INFO][4270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" HandleID="k8s-pod-network.f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.337 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:58:57.342331 containerd[1466]: 2025-09-12 23:58:57.340 [INFO][4248] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:58:57.346126 containerd[1466]: time="2025-09-12T23:58:57.346071142Z" level=info msg="TearDown network for sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\" successfully" Sep 12 23:58:57.346227 systemd[1]: run-netns-cni\x2d8f21db77\x2d0095\x2d585f\x2d5ca4\x2dc5390195bd17.mount: Deactivated successfully. 
Sep 12 23:58:57.347405 containerd[1466]: time="2025-09-12T23:58:57.347352974Z" level=info msg="StopPodSandbox for \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\" returns successfully" Sep 12 23:58:57.349587 containerd[1466]: time="2025-09-12T23:58:57.349289043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68cf785d8c-6sttq,Uid:7464ec4d-b10e-40dc-b6fa-a3b3579b9606,Namespace:calico-system,Attempt:1,}" Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.295 [INFO][4255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.295 [INFO][4255] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" iface="eth0" netns="/var/run/netns/cni-2504bd99-4ca8-7d02-1761-174a8f68ef10" Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.295 [INFO][4255] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" iface="eth0" netns="/var/run/netns/cni-2504bd99-4ca8-7d02-1761-174a8f68ef10" Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.296 [INFO][4255] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" iface="eth0" netns="/var/run/netns/cni-2504bd99-4ca8-7d02-1761-174a8f68ef10" Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.296 [INFO][4255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.296 [INFO][4255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.331 [INFO][4276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" HandleID="k8s-pod-network.adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.331 [INFO][4276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.337 [INFO][4276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.357 [WARNING][4276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" HandleID="k8s-pod-network.adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.357 [INFO][4276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" HandleID="k8s-pod-network.adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.359 [INFO][4276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:58:57.365651 containerd[1466]: 2025-09-12 23:58:57.362 [INFO][4255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:58:57.366092 containerd[1466]: time="2025-09-12T23:58:57.366002784Z" level=info msg="TearDown network for sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\" successfully" Sep 12 23:58:57.366092 containerd[1466]: time="2025-09-12T23:58:57.366035424Z" level=info msg="StopPodSandbox for \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\" returns successfully" Sep 12 23:58:57.368722 containerd[1466]: time="2025-09-12T23:58:57.367939533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vfdq2,Uid:ea8dd6bf-8417-4740-90cb-3ee84da12ecc,Namespace:calico-system,Attempt:1,}" Sep 12 23:58:57.373431 systemd[1]: run-netns-cni\x2d2504bd99\x2d4ca8\x2d7d02\x2d1761\x2d174a8f68ef10.mount: Deactivated successfully. Sep 12 23:58:57.561733 systemd-networkd[1376]: cali9320638a2fe: Link UP Sep 12 23:58:57.562141 systemd-networkd[1376]: cali9320638a2fe: Gained carrier Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.453 [INFO][4286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0 calico-kube-controllers-68cf785d8c- calico-system 7464ec4d-b10e-40dc-b6fa-a3b3579b9606 955 0 2025-09-12 23:58:37 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68cf785d8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-n-f526684106 calico-kube-controllers-68cf785d8c-6sttq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9320638a2fe [] [] }} ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Namespace="calico-system" Pod="calico-kube-controllers-68cf785d8c-6sttq" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.455 [INFO][4286] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Namespace="calico-system" Pod="calico-kube-controllers-68cf785d8c-6sttq" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.498 [INFO][4309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1
IPv6=0 ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" HandleID="k8s-pod-network.34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.498 [INFO][4309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" HandleID="k8s-pod-network.34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-f526684106", "pod":"calico-kube-controllers-68cf785d8c-6sttq", "timestamp":"2025-09-12 23:58:57.498254203 +0000 UTC"}, Hostname:"ci-4081-3-5-n-f526684106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.498 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.498 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.498 [INFO][4309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-f526684106' Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.514 [INFO][4309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.521 [INFO][4309] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.528 [INFO][4309] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.531 [INFO][4309] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.536 [INFO][4309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.536 [INFO][4309] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.540 [INFO][4309] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011 Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.545 [INFO][4309] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.552 [INFO][4309] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.66/26] block=192.168.96.64/26 
handle="k8s-pod-network.34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.552 [INFO][4309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.66/26] handle="k8s-pod-network.34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.553 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:58:57.587994 containerd[1466]: 2025-09-12 23:58:57.553 [INFO][4309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.66/26] IPv6=[] ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" HandleID="k8s-pod-network.34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:58:57.589166 containerd[1466]: 2025-09-12 23:58:57.556 [INFO][4286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Namespace="calico-system" Pod="calico-kube-controllers-68cf785d8c-6sttq" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0", GenerateName:"calico-kube-controllers-68cf785d8c-", Namespace:"calico-system", SelfLink:"", UID:"7464ec4d-b10e-40dc-b6fa-a3b3579b9606", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68cf785d8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"", Pod:"calico-kube-controllers-68cf785d8c-6sttq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9320638a2fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:57.589166 containerd[1466]: 2025-09-12 23:58:57.556 [INFO][4286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.66/32] ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Namespace="calico-system" Pod="calico-kube-controllers-68cf785d8c-6sttq" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:58:57.589166 containerd[1466]: 2025-09-12 23:58:57.556 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9320638a2fe ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Namespace="calico-system" 
Pod="calico-kube-controllers-68cf785d8c-6sttq" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:58:57.589166 containerd[1466]: 2025-09-12 23:58:57.567 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Namespace="calico-system" Pod="calico-kube-controllers-68cf785d8c-6sttq" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:58:57.589166 containerd[1466]: 2025-09-12 23:58:57.568 [INFO][4286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Namespace="calico-system" Pod="calico-kube-controllers-68cf785d8c-6sttq" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0", GenerateName:"calico-kube-controllers-68cf785d8c-", Namespace:"calico-system", SelfLink:"", UID:"7464ec4d-b10e-40dc-b6fa-a3b3579b9606", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68cf785d8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011", Pod:"calico-kube-controllers-68cf785d8c-6sttq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9320638a2fe", MAC:"de:de:a5:1c:6f:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:57.589166 containerd[1466]: 2025-09-12 23:58:57.584 [INFO][4286] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011" Namespace="calico-system" Pod="calico-kube-controllers-68cf785d8c-6sttq" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:58:57.612322 containerd[1466]: time="2025-09-12T23:58:57.612154691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:57.612322 containerd[1466]: time="2025-09-12T23:58:57.612278250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:57.612322 containerd[1466]: time="2025-09-12T23:58:57.612294330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:57.612783 containerd[1466]: time="2025-09-12T23:58:57.612476289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:57.642861 systemd[1]: Started cri-containerd-34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011.scope - libcontainer container 34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011. Sep 12 23:58:57.678122 systemd-networkd[1376]: calic0df6ca3983: Link UP Sep 12 23:58:57.679052 systemd-networkd[1376]: calic0df6ca3983: Gained carrier Sep 12 23:58:57.712999 containerd[1466]: time="2025-09-12T23:58:57.711957422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68cf785d8c-6sttq,Uid:7464ec4d-b10e-40dc-b6fa-a3b3579b9606,Namespace:calico-system,Attempt:1,} returns sandbox id \"34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011\"" Sep 12 23:58:57.718068 containerd[1466]: time="2025-09-12T23:58:57.717846187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.459 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0 goldmane-54d579b49d- calico-system ea8dd6bf-8417-4740-90cb-3ee84da12ecc 956 0 2025-09-12 23:58:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-n-f526684106 goldmane-54d579b49d-vfdq2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic0df6ca3983 [] [] }} ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" Namespace="calico-system" Pod="goldmane-54d579b49d-vfdq2" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.459 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" Namespace="calico-system" Pod="goldmane-54d579b49d-vfdq2" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.507 [INFO][4314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" HandleID="k8s-pod-network.14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.507 [INFO][4314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" HandleID="k8s-pod-network.14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36c0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081-3-5-n-f526684106", "pod":"goldmane-54d579b49d-vfdq2", "timestamp":"2025-09-12 23:58:57.507797587 +0000 UTC"}, Hostname:"ci-4081-3-5-n-f526684106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.508 [INFO][4314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.553 [INFO][4314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.553 [INFO][4314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-f526684106' Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.616 [INFO][4314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.631 [INFO][4314] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.641 [INFO][4314] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.645 [INFO][4314] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.649 [INFO][4314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.649 [INFO][4314] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.654 [INFO][4314] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.661 [INFO][4314] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.670 [INFO][4314] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.67/26] block=192.168.96.64/26 handle="k8s-pod-network.14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.671 [INFO][4314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.67/26] handle="k8s-pod-network.14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.671 [INFO][4314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:58:57.722723 containerd[1466]: 2025-09-12 23:58:57.671 [INFO][4314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.67/26] IPv6=[] ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" HandleID="k8s-pod-network.14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:58:57.724353 containerd[1466]: 2025-09-12 23:58:57.675 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" Namespace="calico-system" Pod="goldmane-54d579b49d-vfdq2" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ea8dd6bf-8417-4740-90cb-3ee84da12ecc", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"", Pod:"goldmane-54d579b49d-vfdq2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0df6ca3983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:57.724353 containerd[1466]: 2025-09-12 23:58:57.675 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.67/32] ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" Namespace="calico-system" Pod="goldmane-54d579b49d-vfdq2" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:58:57.724353 containerd[1466]: 2025-09-12 23:58:57.675 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0df6ca3983 ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" Namespace="calico-system" Pod="goldmane-54d579b49d-vfdq2" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:58:57.724353 containerd[1466]: 2025-09-12 23:58:57.682 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" Namespace="calico-system" Pod="goldmane-54d579b49d-vfdq2" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:58:57.724353 containerd[1466]: 2025-09-12 23:58:57.685 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" 
Namespace="calico-system" Pod="goldmane-54d579b49d-vfdq2" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ea8dd6bf-8417-4740-90cb-3ee84da12ecc", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb", Pod:"goldmane-54d579b49d-vfdq2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0df6ca3983", MAC:"1e:62:a1:01:06:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:57.724353 containerd[1466]: 2025-09-12 23:58:57.716 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb" Namespace="calico-system" Pod="goldmane-54d579b49d-vfdq2" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:58:57.747439 containerd[1466]: time="2025-09-12T23:58:57.746771616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:57.747439 containerd[1466]: time="2025-09-12T23:58:57.746835296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:57.747439 containerd[1466]: time="2025-09-12T23:58:57.746850896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:57.747439 containerd[1466]: time="2025-09-12T23:58:57.746955135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:57.773860 systemd[1]: Started cri-containerd-14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb.scope - libcontainer container 14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb. 
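[annotation] Each sandbox runs under a transient systemd scope named after its container ID ("Started cri-containerd-<id>.scope" above). A small sketch that rebuilds the unit name from a sandbox ID and, on a systemd host, queries its state — the naming format is taken from the log itself; the systemctl invocation is a plain "show" call.

package main

import (
	"fmt"
	"os/exec"
)

// scopeUnit mirrors the naming visible in the log:
// cri-containerd-<sandboxID>.scope
func scopeUnit(sandboxID string) string {
	return fmt.Sprintf("cri-containerd-%s.scope", sandboxID)
}

func main() {
	unit := scopeUnit("14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb")
	fmt.Println(unit)
	// Optional: ask systemd for the unit's state (only works where the
	// scope actually exists).
	out, err := exec.Command("systemctl", "show", "-p", "ActiveState", unit).Output()
	if err == nil {
		fmt.Printf("%s", out)
	}
}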
Sep 12 23:58:57.845250 containerd[1466]: time="2025-09-12T23:58:57.845200875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vfdq2,Uid:ea8dd6bf-8417-4740-90cb-3ee84da12ecc,Namespace:calico-system,Attempt:1,} returns sandbox id \"14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb\"" Sep 12 23:58:58.160507 containerd[1466]: time="2025-09-12T23:58:58.158797433Z" level=info msg="StopPodSandbox for \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\"" Sep 12 23:58:58.160507 containerd[1466]: time="2025-09-12T23:58:58.159435349Z" level=info msg="StopPodSandbox for \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\"" Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.237 [INFO][4445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.237 [INFO][4445] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" iface="eth0" netns="/var/run/netns/cni-a4ced926-b849-9261-c85c-29fab8168a0d" Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.239 [INFO][4445] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" iface="eth0" netns="/var/run/netns/cni-a4ced926-b849-9261-c85c-29fab8168a0d" Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.240 [INFO][4445] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" iface="eth0" netns="/var/run/netns/cni-a4ced926-b849-9261-c85c-29fab8168a0d" Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.240 [INFO][4445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.240 [INFO][4445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.277 [INFO][4457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" HandleID="k8s-pod-network.664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.277 [INFO][4457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.277 [INFO][4457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.292 [WARNING][4457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" HandleID="k8s-pod-network.664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.292 [INFO][4457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" HandleID="k8s-pod-network.664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.296 [INFO][4457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:58:58.302327 containerd[1466]: 2025-09-12 23:58:58.298 [INFO][4445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:58:58.303944 containerd[1466]: time="2025-09-12T23:58:58.302583593Z" level=info msg="TearDown network for sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\" successfully" Sep 12 23:58:58.303944 containerd[1466]: time="2025-09-12T23:58:58.302670193Z" level=info msg="StopPodSandbox for \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\" returns successfully" Sep 12 23:58:58.304841 containerd[1466]: time="2025-09-12T23:58:58.304801580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd2gt,Uid:0c4685c7-dc20-48ea-a187-79c68ef78d4c,Namespace:calico-system,Attempt:1,}" Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.246 [INFO][4444] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.246 [INFO][4444] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" iface="eth0" netns="/var/run/netns/cni-0626a41a-a3f2-bb42-fa8f-58138daaa1bc" Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.248 [INFO][4444] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" iface="eth0" netns="/var/run/netns/cni-0626a41a-a3f2-bb42-fa8f-58138daaa1bc" Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.251 [INFO][4444] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" iface="eth0" netns="/var/run/netns/cni-0626a41a-a3f2-bb42-fa8f-58138daaa1bc" Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.251 [INFO][4444] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.251 [INFO][4444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.297 [INFO][4462] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" HandleID="k8s-pod-network.4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.297 [INFO][4462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.297 [INFO][4462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.311 [WARNING][4462] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" HandleID="k8s-pod-network.4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.311 [INFO][4462] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" HandleID="k8s-pod-network.4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.313 [INFO][4462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:58:58.319388 containerd[1466]: 2025-09-12 23:58:58.315 [INFO][4444] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:58:58.331110 containerd[1466]: time="2025-09-12T23:58:58.330839988Z" level=info msg="TearDown network for sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\" successfully" Sep 12 23:58:58.331110 containerd[1466]: time="2025-09-12T23:58:58.330909628Z" level=info msg="StopPodSandbox for \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\" returns successfully" Sep 12 23:58:58.341901 containerd[1466]: time="2025-09-12T23:58:58.341531166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lp4ss,Uid:dfd82b01-39de-4359-8408-91804a116783,Namespace:kube-system,Attempt:1,}" Sep 12 23:58:58.352093 systemd[1]: run-netns-cni\x2da4ced926\x2db849\x2d9261\x2dc85c\x2d29fab8168a0d.mount: Deactivated successfully. Sep 12 23:58:58.352219 systemd[1]: run-netns-cni\x2d0626a41a\x2da3f2\x2dbb42\x2dfa8f\x2d58138daaa1bc.mount: Deactivated successfully. 
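[annotation] The "run-netns-cni\x2d..." mount units above are systemd-escaped mount paths: "/" becomes "-", a literal "-" becomes "\x2d", and the leading slash is dropped. A hand-rolled sketch of the reverse mapping, which recovers the netns path the CNI plugin was handed (systemd ships the same logic as systemd-escape --unescape; note the unit unescapes to /run/netns/..., to which /var/run is a symlink).

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses systemd's path escaping for .mount units:
// "-" separates path components and "\xHH" encodes literal bytes.
func unescapeUnitPath(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	b.WriteByte('/') // escaped names drop the leading slash
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(name[i])
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String()
}

func main() {
	// From the log lines above:
	u := `run-netns-cni\x2da4ced926\x2db849\x2d9261\x2dc85c\x2d29fab8168a0d.mount`
	fmt.Println(unescapeUnitPath(u)) // /run/netns/cni-a4ced926-b849-9261-c85c-29fab8168a0d
}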
Sep 12 23:58:58.541634 systemd-networkd[1376]: cali2d8e109accf: Link UP Sep 12 23:58:58.543099 systemd-networkd[1376]: cali2d8e109accf: Gained carrier Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.391 [INFO][4471] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0 csi-node-driver- calico-system 0c4685c7-dc20-48ea-a187-79c68ef78d4c 970 0 2025-09-12 23:58:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-n-f526684106 csi-node-driver-fd2gt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2d8e109accf [] [] }} ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Namespace="calico-system" Pod="csi-node-driver-fd2gt" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.392 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Namespace="calico-system" Pod="csi-node-driver-fd2gt" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.435 [INFO][4493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" HandleID="k8s-pod-network.33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.436 [INFO][4493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" HandleID="k8s-pod-network.33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-f526684106", "pod":"csi-node-driver-fd2gt", "timestamp":"2025-09-12 23:58:58.435838055 +0000 UTC"}, Hostname:"ci-4081-3-5-n-f526684106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.436 [INFO][4493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.437 [INFO][4493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
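[annotation] systemd-networkd's "Link UP" / "Gained carrier" transitions for the cali* interfaces reflect kernel state that can be read straight from sysfs. A quick check, assuming a Linux host where the interface exists (reading carrier on a down link returns an error, which the sketch surfaces):

package main

import (
	"fmt"
	"os"
	"strings"
)

// linkState reads the kernel's view of an interface: operstate
// ("up", "down", ...) and carrier ("1" once a peer is attached) —
// the same transitions systemd-networkd logs above.
func linkState(iface string) (operstate, carrier string, err error) {
	base := "/sys/class/net/" + iface
	o, err := os.ReadFile(base + "/operstate")
	if err != nil {
		return "", "", err
	}
	c, err := os.ReadFile(base + "/carrier")
	if err != nil {
		return "", "", err
	}
	return strings.TrimSpace(string(o)), strings.TrimSpace(string(c)), nil
}

func main() {
	op, ca, err := linkState("cali2d8e109accf")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("operstate=%s carrier=%s\n", op, ca)
}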
Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.437 [INFO][4493] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-f526684106' Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.451 [INFO][4493] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.462 [INFO][4493] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.470 [INFO][4493] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.475 [INFO][4493] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.484 [INFO][4493] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.484 [INFO][4493] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.489 [INFO][4493] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92 Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.501 [INFO][4493] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.522 [INFO][4493] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.68/26] block=192.168.96.64/26 handle="k8s-pod-network.33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.522 [INFO][4493] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.68/26] handle="k8s-pod-network.33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.523 [INFO][4493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
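[annotation] The host-side veth names above (cali9320638a2fe, calic0df6ca3983, cali2d8e109accf) are deterministic: Calico derives them as "cali" plus a truncated hash of the workload identity, keeping the result inside the kernel's 15-byte interface-name limit. The sketch below assumes SHA-1 over "namespace.pod" as the hashed input — that input string is an assumption for illustration, not a reimplementation of libcalico-go.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName sketches Calico's deterministic host-side interface naming:
// prefix "cali" + leading hex of a hash over the workload identity,
// truncated so the name fits IFNAMSIZ. NOTE: the hashed input here
// ("namespace.pod") is assumed; consult libcalico-go for the real scheme.
func vethName(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-system", "csi-node-driver-fd2gt"))
}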
Sep 12 23:58:58.570029 containerd[1466]: 2025-09-12 23:58:58.523 [INFO][4493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.68/26] IPv6=[] ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" HandleID="k8s-pod-network.33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:58:58.571284 containerd[1466]: 2025-09-12 23:58:58.530 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Namespace="calico-system" Pod="csi-node-driver-fd2gt" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c4685c7-dc20-48ea-a187-79c68ef78d4c", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"", Pod:"csi-node-driver-fd2gt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d8e109accf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:58.571284 containerd[1466]: 2025-09-12 23:58:58.530 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.68/32] ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Namespace="calico-system" Pod="csi-node-driver-fd2gt" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:58:58.571284 containerd[1466]: 2025-09-12 23:58:58.530 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d8e109accf ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Namespace="calico-system" Pod="csi-node-driver-fd2gt" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:58:58.571284 containerd[1466]: 2025-09-12 23:58:58.543 [INFO][4471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Namespace="calico-system" Pod="csi-node-driver-fd2gt" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:58:58.571284 containerd[1466]: 2025-09-12 23:58:58.545 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Namespace="calico-system" Pod="csi-node-driver-fd2gt" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c4685c7-dc20-48ea-a187-79c68ef78d4c", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92", Pod:"csi-node-driver-fd2gt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d8e109accf", MAC:"32:50:48:a0:b9:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:58.571284 containerd[1466]: 2025-09-12 23:58:58.563 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92" Namespace="calico-system" Pod="csi-node-driver-fd2gt" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:58:58.604742 containerd[1466]: time="2025-09-12T23:58:58.604489469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:58.604742 containerd[1466]: time="2025-09-12T23:58:58.604589589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:58.604742 containerd[1466]: time="2025-09-12T23:58:58.604648548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:58.606304 containerd[1466]: time="2025-09-12T23:58:58.606042300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:58.625282 systemd-networkd[1376]: cali223b0e3c828: Link UP Sep 12 23:58:58.626146 systemd-networkd[1376]: cali223b0e3c828: Gained carrier Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.448 [INFO][4487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0 coredns-674b8bbfcf- kube-system dfd82b01-39de-4359-8408-91804a116783 971 0 2025-09-12 23:58:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-f526684106 coredns-674b8bbfcf-lp4ss eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali223b0e3c828 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Namespace="kube-system" Pod="coredns-674b8bbfcf-lp4ss" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.448 [INFO][4487] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Namespace="kube-system" Pod="coredns-674b8bbfcf-lp4ss" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.519 [INFO][4502] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" HandleID="k8s-pod-network.a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.519 [INFO][4502] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" HandleID="k8s-pod-network.a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330d20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-f526684106", "pod":"coredns-674b8bbfcf-lp4ss", "timestamp":"2025-09-12 23:58:58.519059248 +0000 UTC"}, Hostname:"ci-4081-3-5-n-f526684106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.519 [INFO][4502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.523 [INFO][4502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
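[annotation] The Port:0x35 and Port:0x23c1 fields in the dumped coredns endpoint are not unusual port numbers — they are the uint16 values 53 (dns, dns-tcp) and 9153 (metrics) rendered in hex, which is how Go's %#v verb prints unsigned integer struct fields. A two-line demonstration:

package main

import "fmt"

type port struct{ Port uint16 }

func main() {
	// %#v prints unsigned integers in hex, which is why the endpoint
	// dump above shows 53 as 0x35 and 9153 as 0x23c1.
	fmt.Printf("%#v %#v\n", port{53}, port{9153})
	fmt.Println(0x35, 0x23c1) // 53 9153
}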
Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.523 [INFO][4502] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-f526684106' Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.552 [INFO][4502] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.564 [INFO][4502] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.575 [INFO][4502] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.578 [INFO][4502] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.584 [INFO][4502] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.584 [INFO][4502] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.588 [INFO][4502] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.597 [INFO][4502] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.612 [INFO][4502] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.69/26] block=192.168.96.64/26 handle="k8s-pod-network.a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.612 [INFO][4502] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.69/26] handle="k8s-pod-network.a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.612 [INFO][4502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
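[annotation] All four sandboxes in this section drew from the same affine block: 192.168.96.64/26 spans .64 through .127 (64 addresses), and the claims came out sequentially as .66, .67, .68, .69. The arithmetic, checked with net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.96.64/26")
	first := block.Addr()
	last := first
	for a := first; block.Contains(a); a = a.Next() {
		last = a
	}
	// A /26 holds 2^(32-26) = 64 addresses: .64 .. .127.
	fmt.Println("range:", first, "-", last)
	for _, ip := range []string{"192.168.96.66", "192.168.96.67", "192.168.96.68", "192.168.96.69"} {
		fmt.Println(ip, "in block:", block.Contains(netip.MustParseAddr(ip)))
	}
}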
Sep 12 23:58:58.668488 containerd[1466]: 2025-09-12 23:58:58.614 [INFO][4502] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.69/26] IPv6=[] ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" HandleID="k8s-pod-network.a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:58:58.669372 containerd[1466]: 2025-09-12 23:58:58.621 [INFO][4487] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Namespace="kube-system" Pod="coredns-674b8bbfcf-lp4ss" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dfd82b01-39de-4359-8408-91804a116783", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"", Pod:"coredns-674b8bbfcf-lp4ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali223b0e3c828", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:58.669372 containerd[1466]: 2025-09-12 23:58:58.622 [INFO][4487] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.69/32] ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Namespace="kube-system" Pod="coredns-674b8bbfcf-lp4ss" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:58:58.669372 containerd[1466]: 2025-09-12 23:58:58.622 [INFO][4487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali223b0e3c828 ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Namespace="kube-system" Pod="coredns-674b8bbfcf-lp4ss" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:58:58.669372 containerd[1466]: 2025-09-12 23:58:58.626 [INFO][4487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-lp4ss" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:58:58.669372 containerd[1466]: 2025-09-12 23:58:58.628 [INFO][4487] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Namespace="kube-system" Pod="coredns-674b8bbfcf-lp4ss" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dfd82b01-39de-4359-8408-91804a116783", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af", Pod:"coredns-674b8bbfcf-lp4ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali223b0e3c828", MAC:"de:5b:de:36:aa:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:58.669372 containerd[1466]: 2025-09-12 23:58:58.652 [INFO][4487] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af" Namespace="kube-system" Pod="coredns-674b8bbfcf-lp4ss" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:58:58.680230 systemd[1]: Started cri-containerd-33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92.scope - libcontainer container 33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92. Sep 12 23:58:58.729400 containerd[1466]: time="2025-09-12T23:58:58.729306740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:58.729558 containerd[1466]: time="2025-09-12T23:58:58.729374460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:58.729558 containerd[1466]: time="2025-09-12T23:58:58.729386980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:58.732962 containerd[1466]: time="2025-09-12T23:58:58.730072256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:58.738362 containerd[1466]: time="2025-09-12T23:58:58.737948090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd2gt,Uid:0c4685c7-dc20-48ea-a187-79c68ef78d4c,Namespace:calico-system,Attempt:1,} returns sandbox id \"33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92\"" Sep 12 23:58:58.753844 systemd[1]: Started cri-containerd-a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af.scope - libcontainer container a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af. Sep 12 23:58:58.795688 containerd[1466]: time="2025-09-12T23:58:58.792865889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lp4ss,Uid:dfd82b01-39de-4359-8408-91804a116783,Namespace:kube-system,Attempt:1,} returns sandbox id \"a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af\"" Sep 12 23:58:58.808229 containerd[1466]: time="2025-09-12T23:58:58.808184279Z" level=info msg="CreateContainer within sandbox \"a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:58:58.841975 containerd[1466]: time="2025-09-12T23:58:58.841782443Z" level=info msg="CreateContainer within sandbox \"a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b06c3c77772fd85f9ffe38866b834779e4b1f81de44bb2dc2d134eaded54c72\"" Sep 12 23:58:58.843203 containerd[1466]: time="2025-09-12T23:58:58.842588838Z" level=info msg="StartContainer for \"7b06c3c77772fd85f9ffe38866b834779e4b1f81de44bb2dc2d134eaded54c72\"" Sep 12 23:58:58.880873 systemd[1]: Started cri-containerd-7b06c3c77772fd85f9ffe38866b834779e4b1f81de44bb2dc2d134eaded54c72.scope - libcontainer container 7b06c3c77772fd85f9ffe38866b834779e4b1f81de44bb2dc2d134eaded54c72. Sep 12 23:58:58.915392 containerd[1466]: time="2025-09-12T23:58:58.915058175Z" level=info msg="StartContainer for \"7b06c3c77772fd85f9ffe38866b834779e4b1f81de44bb2dc2d134eaded54c72\" returns successfully" Sep 12 23:58:59.162751 containerd[1466]: time="2025-09-12T23:58:59.162320940Z" level=info msg="StopPodSandbox for \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\"" Sep 12 23:58:59.166948 containerd[1466]: time="2025-09-12T23:58:59.166145517Z" level=info msg="StopPodSandbox for \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\"" Sep 12 23:58:59.191573 systemd-networkd[1376]: cali9320638a2fe: Gained IPv6LL Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.304 [INFO][4666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.304 [INFO][4666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" iface="eth0" netns="/var/run/netns/cni-b292a526-e9e7-1218-5cd7-45cff9cf70f5" Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.304 [INFO][4666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" iface="eth0" netns="/var/run/netns/cni-b292a526-e9e7-1218-5cd7-45cff9cf70f5" Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.308 [INFO][4666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" iface="eth0" netns="/var/run/netns/cni-b292a526-e9e7-1218-5cd7-45cff9cf70f5" Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.308 [INFO][4666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.308 [INFO][4666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.352 [INFO][4679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" HandleID="k8s-pod-network.b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.352 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.352 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.366 [WARNING][4679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" HandleID="k8s-pod-network.b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.366 [INFO][4679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" HandleID="k8s-pod-network.b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.370 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:58:59.380065 containerd[1466]: 2025-09-12 23:58:59.374 [INFO][4666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:58:59.383442 containerd[1466]: time="2025-09-12T23:58:59.380759516Z" level=info msg="TearDown network for sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\" successfully" Sep 12 23:58:59.383536 containerd[1466]: time="2025-09-12T23:58:59.382972223Z" level=info msg="StopPodSandbox for \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\" returns successfully" Sep 12 23:58:59.384646 systemd[1]: run-netns-cni\x2db292a526\x2de9e7\x2d1218\x2d5cd7\x2d45cff9cf70f5.mount: Deactivated successfully. 
Sep 12 23:58:59.387472 containerd[1466]: time="2025-09-12T23:58:59.387346198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bb8b78f84-b4ll8,Uid:2850e962-1009-4f88-9096-205a9a322adb,Namespace:calico-apiserver,Attempt:1,}" Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.324 [INFO][4667] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.325 [INFO][4667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" iface="eth0" netns="/var/run/netns/cni-e3782e39-105d-b05d-7413-989c90cb8a58" Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.325 [INFO][4667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" iface="eth0" netns="/var/run/netns/cni-e3782e39-105d-b05d-7413-989c90cb8a58" Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.327 [INFO][4667] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" iface="eth0" netns="/var/run/netns/cni-e3782e39-105d-b05d-7413-989c90cb8a58" Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.328 [INFO][4667] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.328 [INFO][4667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.393 [INFO][4684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" HandleID="k8s-pod-network.0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.393 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.393 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.423 [WARNING][4684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" HandleID="k8s-pod-network.0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.423 [INFO][4684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" HandleID="k8s-pod-network.0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.426 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:58:59.439521 containerd[1466]: 2025-09-12 23:58:59.431 [INFO][4667] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:58:59.442368 containerd[1466]: time="2025-09-12T23:58:59.441286846Z" level=info msg="TearDown network for sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\" successfully" Sep 12 23:58:59.442368 containerd[1466]: time="2025-09-12T23:58:59.441337686Z" level=info msg="StopPodSandbox for \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\" returns successfully" Sep 12 23:58:59.445686 containerd[1466]: time="2025-09-12T23:58:59.444491628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4mjz4,Uid:476a73de-395b-42d1-aef0-13cde14fd9c8,Namespace:kube-system,Attempt:1,}" Sep 12 23:58:59.445313 systemd[1]: run-netns-cni\x2de3782e39\x2d105d\x2db05d\x2d7413\x2d989c90cb8a58.mount: Deactivated successfully. Sep 12 23:58:59.511270 systemd-networkd[1376]: calic0df6ca3983: Gained IPv6LL Sep 12 23:58:59.543550 kubelet[2604]: I0912 23:58:59.543471 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lp4ss" podStartSLOduration=39.543451095 podStartE2EDuration="39.543451095s" podCreationTimestamp="2025-09-12 23:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:59.539330279 +0000 UTC m=+46.531016528" watchObservedRunningTime="2025-09-12 23:58:59.543451095 +0000 UTC m=+46.535137344" Sep 12 23:58:59.726564 systemd-networkd[1376]: cali22f43121028: Link UP Sep 12 23:58:59.726767 systemd-networkd[1376]: cali22f43121028: Gained carrier Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.500 [INFO][4692] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0 calico-apiserver-bb8b78f84- calico-apiserver 2850e962-1009-4f88-9096-205a9a322adb 987 0 2025-09-12 23:58:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bb8b78f84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-f526684106 calico-apiserver-bb8b78f84-b4ll8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali22f43121028 [] [] }} ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-b4ll8" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.500 [INFO][4692] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-b4ll8" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.616 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" HandleID="k8s-pod-network.0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.616 
[INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" HandleID="k8s-pod-network.0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-f526684106", "pod":"calico-apiserver-bb8b78f84-b4ll8", "timestamp":"2025-09-12 23:58:59.615955316 +0000 UTC"}, Hostname:"ci-4081-3-5-n-f526684106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.616 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.618 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.618 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-f526684106' Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.640 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.655 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.671 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.677 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.683 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.683 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.687 [INFO][4715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.695 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.707 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.70/26] block=192.168.96.64/26 handle="k8s-pod-network.0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.707 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.70/26] handle="k8s-pod-network.0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" host="ci-4081-3-5-n-f526684106" Sep 12 
23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.707 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:58:59.751065 containerd[1466]: 2025-09-12 23:58:59.707 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.70/26] IPv6=[] ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" HandleID="k8s-pod-network.0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:58:59.751673 containerd[1466]: 2025-09-12 23:58:59.713 [INFO][4692] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-b4ll8" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0", GenerateName:"calico-apiserver-bb8b78f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"2850e962-1009-4f88-9096-205a9a322adb", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bb8b78f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"", Pod:"calico-apiserver-bb8b78f84-b4ll8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22f43121028", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:59.751673 containerd[1466]: 2025-09-12 23:58:59.717 [INFO][4692] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.70/32] ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-b4ll8" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:58:59.751673 containerd[1466]: 2025-09-12 23:58:59.717 [INFO][4692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22f43121028 ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-b4ll8" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:58:59.751673 containerd[1466]: 2025-09-12 23:58:59.725 [INFO][4692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-b4ll8" 
WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:58:59.751673 containerd[1466]: 2025-09-12 23:58:59.725 [INFO][4692] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-b4ll8" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0", GenerateName:"calico-apiserver-bb8b78f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"2850e962-1009-4f88-9096-205a9a322adb", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bb8b78f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab", Pod:"calico-apiserver-bb8b78f84-b4ll8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22f43121028", MAC:"16:6e:c9:8b:cb:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:59.751673 containerd[1466]: 2025-09-12 23:58:59.744 [INFO][4692] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-b4ll8" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:58:59.818904 containerd[1466]: time="2025-09-12T23:58:59.817941707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:59.818904 containerd[1466]: time="2025-09-12T23:58:59.818011907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:59.818904 containerd[1466]: time="2025-09-12T23:58:59.818027827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:59.818904 containerd[1466]: time="2025-09-12T23:58:59.818121426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:59.848502 systemd-networkd[1376]: cali09df93d5943: Link UP Sep 12 23:58:59.849966 systemd-networkd[1376]: cali09df93d5943: Gained carrier Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.571 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0 coredns-674b8bbfcf- kube-system 476a73de-395b-42d1-aef0-13cde14fd9c8 988 0 2025-09-12 23:58:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-f526684106 coredns-674b8bbfcf-4mjz4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09df93d5943 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mjz4" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.573 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mjz4" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.674 [INFO][4724] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" HandleID="k8s-pod-network.4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.674 [INFO][4724] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" HandleID="k8s-pod-network.4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b970), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-f526684106", "pod":"coredns-674b8bbfcf-4mjz4", "timestamp":"2025-09-12 23:58:59.67398578 +0000 UTC"}, Hostname:"ci-4081-3-5-n-f526684106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.674 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.707 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.708 [INFO][4724] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-f526684106' Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.749 [INFO][4724] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.764 [INFO][4724] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.780 [INFO][4724] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.785 [INFO][4724] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.795 [INFO][4724] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.795 [INFO][4724] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.800 [INFO][4724] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431 Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.816 [INFO][4724] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.832 [INFO][4724] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.71/26] block=192.168.96.64/26 handle="k8s-pod-network.4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.832 [INFO][4724] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.71/26] handle="k8s-pod-network.4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" host="ci-4081-3-5-n-f526684106" Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.832 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
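The IPAM transaction just logged is the whole allocation path in miniature: acquire the host-wide lock, look up this host's block affinities, load the affine block 192.168.96.64/26, claim the first free ordinal (here .71 — .69 and .70 went to coredns-lp4ss and apiserver-b4ll8 above, which implies .64–.68 were taken by earlier endpoints), write the block back to the datastore to make the claim durable, then release the lock. The sketch below illustrates only the "first free ordinal in an affine block" step — hypothetical names, not Calico's actual code — using Go's net/netip:

    // ipam_block_sketch.go — illustrative only; mimics the sequence in the log.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    type block struct {
    	cidr      netip.Prefix   // 192.168.96.64/26 -> 64 ordinals
    	allocated map[int]string // ordinal -> handle, persisted on "Writing block"
    }

    // assign claims the first free ordinal. In the real plugin this runs while
    // holding the host-wide IPAM lock, and the block write-back is atomic.
    func (b *block) assign(handle string) (netip.Addr, bool) {
    	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
    	addr := b.cidr.Addr()
    	for ord := 0; ord < size; ord++ {
    		if _, used := b.allocated[ord]; !used {
    			b.allocated[ord] = handle
    			return addr, true
    		}
    		addr = addr.Next()
    	}
    	return netip.Addr{}, false // exhausted: IPAM would try another block
    }

    func main() {
    	b := &block{cidr: netip.MustParsePrefix("192.168.96.64/26"), allocated: map[int]string{}}
    	for i := 0; i < 7; i++ {
    		b.assign("earlier-endpoints") // .64 through .70 already claimed
    	}
    	ip, _ := b.assign("k8s-pod-network.example") // hypothetical handle
    	fmt.Println(ip)                              // 192.168.96.71, matching the claim above
    }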
Sep 12 23:58:59.883581 containerd[1466]: 2025-09-12 23:58:59.832 [INFO][4724] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.71/26] IPv6=[] ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" HandleID="k8s-pod-network.4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:58:59.884461 containerd[1466]: 2025-09-12 23:58:59.838 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mjz4" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"476a73de-395b-42d1-aef0-13cde14fd9c8", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"", Pod:"coredns-674b8bbfcf-4mjz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09df93d5943", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:59.884461 containerd[1466]: 2025-09-12 23:58:59.840 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.71/32] ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mjz4" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:58:59.884461 containerd[1466]: 2025-09-12 23:58:59.841 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09df93d5943 ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mjz4" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:58:59.884461 containerd[1466]: 2025-09-12 23:58:59.851 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-4mjz4" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:58:59.884461 containerd[1466]: 2025-09-12 23:58:59.851 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mjz4" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"476a73de-395b-42d1-aef0-13cde14fd9c8", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431", Pod:"coredns-674b8bbfcf-4mjz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09df93d5943", MAC:"5e:a3:ce:bb:32:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:58:59.884461 containerd[1466]: 2025-09-12 23:58:59.873 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mjz4" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:58:59.886932 systemd[1]: Started cri-containerd-0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab.scope - libcontainer container 0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab. Sep 12 23:58:59.923269 containerd[1466]: time="2025-09-12T23:58:59.922642622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:59.923269 containerd[1466]: time="2025-09-12T23:58:59.922722661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:59.923269 containerd[1466]: time="2025-09-12T23:58:59.922738541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:59.923269 containerd[1466]: time="2025-09-12T23:58:59.922906420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:59.952733 systemd[1]: Started cri-containerd-4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431.scope - libcontainer container 4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431. Sep 12 23:58:59.960155 systemd-networkd[1376]: cali2d8e109accf: Gained IPv6LL Sep 12 23:59:00.050078 containerd[1466]: time="2025-09-12T23:59:00.049959928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bb8b78f84-b4ll8,Uid:2850e962-1009-4f88-9096-205a9a322adb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab\"" Sep 12 23:59:00.062522 containerd[1466]: time="2025-09-12T23:59:00.062140298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4mjz4,Uid:476a73de-395b-42d1-aef0-13cde14fd9c8,Namespace:kube-system,Attempt:1,} returns sandbox id \"4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431\"" Sep 12 23:59:00.073932 containerd[1466]: time="2025-09-12T23:59:00.073867311Z" level=info msg="CreateContainer within sandbox \"4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:59:00.087643 systemd-networkd[1376]: cali223b0e3c828: Gained IPv6LL Sep 12 23:59:00.100345 containerd[1466]: time="2025-09-12T23:59:00.099844282Z" level=info msg="CreateContainer within sandbox \"4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87e00edf8eb88870c0fd8dc04924cee995c0a5026677fdaad4dfd330ac25225f\"" Sep 12 23:59:00.103806 containerd[1466]: time="2025-09-12T23:59:00.103753980Z" level=info msg="StartContainer for \"87e00edf8eb88870c0fd8dc04924cee995c0a5026677fdaad4dfd330ac25225f\"" Sep 12 23:59:00.151144 systemd[1]: Started cri-containerd-87e00edf8eb88870c0fd8dc04924cee995c0a5026677fdaad4dfd330ac25225f.scope - libcontainer container 87e00edf8eb88870c0fd8dc04924cee995c0a5026677fdaad4dfd330ac25225f. Sep 12 23:59:00.160506 containerd[1466]: time="2025-09-12T23:59:00.159673179Z" level=info msg="StopPodSandbox for \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\"" Sep 12 23:59:00.211188 containerd[1466]: time="2025-09-12T23:59:00.211051925Z" level=info msg="StartContainer for \"87e00edf8eb88870c0fd8dc04924cee995c0a5026677fdaad4dfd330ac25225f\" returns successfully" Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.310 [INFO][4875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.310 [INFO][4875] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" iface="eth0" netns="/var/run/netns/cni-0c4deee5-7e0e-ee47-7cce-7f0ec12e726b" Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.311 [INFO][4875] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" iface="eth0" netns="/var/run/netns/cni-0c4deee5-7e0e-ee47-7cce-7f0ec12e726b" Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.312 [INFO][4875] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" iface="eth0" netns="/var/run/netns/cni-0c4deee5-7e0e-ee47-7cce-7f0ec12e726b" Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.312 [INFO][4875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.312 [INFO][4875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.358 [INFO][4896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" HandleID="k8s-pod-network.a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.358 [INFO][4896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.358 [INFO][4896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.373 [WARNING][4896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" HandleID="k8s-pod-network.a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.373 [INFO][4896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" HandleID="k8s-pod-network.a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.375 [INFO][4896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:00.383055 containerd[1466]: 2025-09-12 23:59:00.378 [INFO][4875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:00.386394 containerd[1466]: time="2025-09-12T23:59:00.383649976Z" level=info msg="TearDown network for sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\" successfully" Sep 12 23:59:00.386394 containerd[1466]: time="2025-09-12T23:59:00.383719096Z" level=info msg="StopPodSandbox for \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\" returns successfully" Sep 12 23:59:00.387736 systemd[1]: run-netns-cni\x2d0c4deee5\x2d7e0e\x2dee47\x2d7cce\x2d7f0ec12e726b.mount: Deactivated successfully. 
Sep 12 23:59:00.390760 containerd[1466]: time="2025-09-12T23:59:00.390709616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bb8b78f84-8mxsb,Uid:4f365313-855d-43b4-a047-74e3275b3496,Namespace:calico-apiserver,Attempt:1,}" Sep 12 23:59:00.562655 kubelet[2604]: I0912 23:59:00.562531 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4mjz4" podStartSLOduration=39.562508031 podStartE2EDuration="39.562508031s" podCreationTimestamp="2025-09-12 23:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:59:00.56095876 +0000 UTC m=+47.552645009" watchObservedRunningTime="2025-09-12 23:59:00.562508031 +0000 UTC m=+47.554194280" Sep 12 23:59:00.683503 systemd-networkd[1376]: calia3f81b6d1e7: Link UP Sep 12 23:59:00.686751 systemd-networkd[1376]: calia3f81b6d1e7: Gained carrier Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.493 [INFO][4904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0 calico-apiserver-bb8b78f84- calico-apiserver 4f365313-855d-43b4-a047-74e3275b3496 1008 0 2025-09-12 23:58:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bb8b78f84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-f526684106 calico-apiserver-bb8b78f84-8mxsb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia3f81b6d1e7 [] [] }} ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-8mxsb" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.493 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-8mxsb" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.555 [INFO][4915] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" HandleID="k8s-pod-network.7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.556 [INFO][4915] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" HandleID="k8s-pod-network.7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3d70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-f526684106", "pod":"calico-apiserver-bb8b78f84-8mxsb", "timestamp":"2025-09-12 23:59:00.55585919 +0000 UTC"}, Hostname:"ci-4081-3-5-n-f526684106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.556 [INFO][4915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.556 [INFO][4915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.556 [INFO][4915] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-f526684106' Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.593 [INFO][4915] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" host="ci-4081-3-5-n-f526684106" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.617 [INFO][4915] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-f526684106" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.634 [INFO][4915] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.639 [INFO][4915] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.645 [INFO][4915] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ci-4081-3-5-n-f526684106" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.645 [INFO][4915] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" host="ci-4081-3-5-n-f526684106" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.649 [INFO][4915] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.658 [INFO][4915] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" host="ci-4081-3-5-n-f526684106" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.672 [INFO][4915] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.72/26] block=192.168.96.64/26 handle="k8s-pod-network.7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" host="ci-4081-3-5-n-f526684106" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.672 [INFO][4915] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.72/26] handle="k8s-pod-network.7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" host="ci-4081-3-5-n-f526684106" Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.672 [INFO][4915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
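The timestamps bracket how long the host-wide IPAM lock is held for one allocation: acquired at 23:59:00.556, released at 23:59:00.672, roughly 116 ms to load the /26 block and claim one address. Since the lock serializes all IPAM on the host, that per-allocation window bounds how quickly endpoints can come up on a busy node. Quick check of the arithmetic:

    // lock_window.go — duration between the Acquired/Released entries above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "15:04:05.000"
    	acq, _ := time.Parse(layout, "23:59:00.556")
    	rel, _ := time.Parse(layout, "23:59:00.672")
    	fmt.Println(rel.Sub(acq)) // 116ms
    }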
Sep 12 23:59:00.713623 containerd[1466]: 2025-09-12 23:59:00.672 [INFO][4915] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.72/26] IPv6=[] ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" HandleID="k8s-pod-network.7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:00.714225 containerd[1466]: 2025-09-12 23:59:00.674 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-8mxsb" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0", GenerateName:"calico-apiserver-bb8b78f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f365313-855d-43b4-a047-74e3275b3496", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bb8b78f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"", Pod:"calico-apiserver-bb8b78f84-8mxsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3f81b6d1e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:00.714225 containerd[1466]: 2025-09-12 23:59:00.675 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.72/32] ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-8mxsb" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:00.714225 containerd[1466]: 2025-09-12 23:59:00.675 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3f81b6d1e7 ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-8mxsb" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:00.714225 containerd[1466]: 2025-09-12 23:59:00.689 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-8mxsb" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:00.714225 containerd[1466]: 2025-09-12 23:59:00.690 
[INFO][4904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-8mxsb" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0", GenerateName:"calico-apiserver-bb8b78f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f365313-855d-43b4-a047-74e3275b3496", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bb8b78f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec", Pod:"calico-apiserver-bb8b78f84-8mxsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3f81b6d1e7", MAC:"8e:2d:68:8d:a9:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:00.714225 containerd[1466]: 2025-09-12 23:59:00.708 [INFO][4904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec" Namespace="calico-apiserver" Pod="calico-apiserver-bb8b78f84-8mxsb" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:00.761968 containerd[1466]: time="2025-09-12T23:59:00.761497451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:00.761968 containerd[1466]: time="2025-09-12T23:59:00.761659010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:00.761968 containerd[1466]: time="2025-09-12T23:59:00.761687770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:00.761968 containerd[1466]: time="2025-09-12T23:59:00.761830449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:00.822143 systemd[1]: Started cri-containerd-7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec.scope - libcontainer container 7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec. 
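The endpoint dump just written back records MAC "8e:2d:68:8d:a9:e0"; like the other endpoint MACs in this log (de:5b:..., 16:6e:..., 5e:a3:...), its first octet has the 0x02 locally-administered bit set and the 0x01 multicast bit clear — consistent with randomly generated unicast addresses rather than vendor-assigned OUIs. A quick verification:

    // mac_check.go — confirm the endpoint MACs above are locally administered unicast.
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	macs := []string{"de:5b:de:36:aa:fa", "16:6e:c9:8b:cb:21", "5e:a3:ce:bb:32:77", "8e:2d:68:8d:a9:e0"}
    	for _, s := range macs {
    		hw, err := net.ParseMAC(s)
    		if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s local=%v unicast=%v\n", s, hw[0]&0x02 != 0, hw[0]&0x01 == 0)
    	}
    }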
Sep 12 23:59:00.876640 containerd[1466]: time="2025-09-12T23:59:00.876487032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bb8b78f84-8mxsb,Uid:4f365313-855d-43b4-a047-74e3275b3496,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec\"" Sep 12 23:59:00.902245 containerd[1466]: time="2025-09-12T23:59:00.901245691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:00.902245 containerd[1466]: time="2025-09-12T23:59:00.902178525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 23:59:00.903168 containerd[1466]: time="2025-09-12T23:59:00.903109280Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:00.907549 containerd[1466]: time="2025-09-12T23:59:00.907150177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:00.908408 containerd[1466]: time="2025-09-12T23:59:00.908337810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.190441583s" Sep 12 23:59:00.908408 containerd[1466]: time="2025-09-12T23:59:00.908397490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 23:59:00.911231 containerd[1466]: time="2025-09-12T23:59:00.911173714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 23:59:00.921165 containerd[1466]: time="2025-09-12T23:59:00.921100857Z" level=info msg="CreateContainer within sandbox \"34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 23:59:00.940416 containerd[1466]: time="2025-09-12T23:59:00.940284987Z" level=info msg="CreateContainer within sandbox \"34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e473a1c1e83c53d07badd1b11349eaf3e60cab265e6f625daab843b1fb9c86ff\"" Sep 12 23:59:00.942768 containerd[1466]: time="2025-09-12T23:59:00.942554094Z" level=info msg="StartContainer for \"e473a1c1e83c53d07badd1b11349eaf3e60cab265e6f625daab843b1fb9c86ff\"" Sep 12 23:59:00.969834 systemd[1]: Started cri-containerd-e473a1c1e83c53d07badd1b11349eaf3e60cab265e6f625daab843b1fb9c86ff.scope - libcontainer container e473a1c1e83c53d07badd1b11349eaf3e60cab265e6f625daab843b1fb9c86ff. 
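The kube-controllers pull logged above moved a 49,504,166-byte image in 3.190441583 s, about 15.5 MB/s from ghcr.io (the "bytes read=48134957" counter is slightly lower than the reported size; the log itself doesn't say why — plausibly a layer was already present locally). The arithmetic:

    // pull_rate.go — effective pull rate for the kube-controllers image above.
    package main

    import "fmt"

    func main() {
    	const imageBytes = 49504166.0
    	const seconds = 3.190441583
    	fmt.Printf("%.1f MB/s\n", imageBytes/seconds/1e6) // 15.5 MB/s
    }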
Sep 12 23:59:01.016906 containerd[1466]: time="2025-09-12T23:59:01.016752070Z" level=info msg="StartContainer for \"e473a1c1e83c53d07badd1b11349eaf3e60cab265e6f625daab843b1fb9c86ff\" returns successfully" Sep 12 23:59:01.303082 systemd-networkd[1376]: cali22f43121028: Gained IPv6LL Sep 12 23:59:01.303506 systemd-networkd[1376]: cali09df93d5943: Gained IPv6LL Sep 12 23:59:01.647109 kubelet[2604]: I0912 23:59:01.645731 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68cf785d8c-6sttq" podStartSLOduration=21.452374771 podStartE2EDuration="24.645710618s" podCreationTimestamp="2025-09-12 23:58:37 +0000 UTC" firstStartedPulling="2025-09-12 23:58:57.716829313 +0000 UTC m=+44.708515562" lastFinishedPulling="2025-09-12 23:59:00.91016516 +0000 UTC m=+47.901851409" observedRunningTime="2025-09-12 23:59:01.578755838 +0000 UTC m=+48.570442087" watchObservedRunningTime="2025-09-12 23:59:01.645710618 +0000 UTC m=+48.637396867" Sep 12 23:59:02.455559 systemd-networkd[1376]: calia3f81b6d1e7: Gained IPv6LL Sep 12 23:59:02.801158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006803621.mount: Deactivated successfully. Sep 12 23:59:03.295287 containerd[1466]: time="2025-09-12T23:59:03.295153527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:03.296816 containerd[1466]: time="2025-09-12T23:59:03.296757478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 23:59:03.297638 containerd[1466]: time="2025-09-12T23:59:03.297567513Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:03.301043 containerd[1466]: time="2025-09-12T23:59:03.300981534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:03.302103 containerd[1466]: time="2025-09-12T23:59:03.301961409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.390731936s" Sep 12 23:59:03.302103 containerd[1466]: time="2025-09-12T23:59:03.302002169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 23:59:03.305655 containerd[1466]: time="2025-09-12T23:59:03.305201151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 23:59:03.307865 containerd[1466]: time="2025-09-12T23:59:03.307809056Z" level=info msg="CreateContainer within sandbox \"14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 23:59:03.325403 containerd[1466]: time="2025-09-12T23:59:03.325268839Z" level=info msg="CreateContainer within sandbox \"14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id 
\"cf81b7a91c5bb98cb8c20c02ff435989a0467f545f0528978c3c7951ae8463fd\"" Sep 12 23:59:03.326455 containerd[1466]: time="2025-09-12T23:59:03.326412272Z" level=info msg="StartContainer for \"cf81b7a91c5bb98cb8c20c02ff435989a0467f545f0528978c3c7951ae8463fd\"" Sep 12 23:59:03.370915 systemd[1]: Started cri-containerd-cf81b7a91c5bb98cb8c20c02ff435989a0467f545f0528978c3c7951ae8463fd.scope - libcontainer container cf81b7a91c5bb98cb8c20c02ff435989a0467f545f0528978c3c7951ae8463fd. Sep 12 23:59:03.415868 containerd[1466]: time="2025-09-12T23:59:03.415559934Z" level=info msg="StartContainer for \"cf81b7a91c5bb98cb8c20c02ff435989a0467f545f0528978c3c7951ae8463fd\" returns successfully" Sep 12 23:59:03.593008 kubelet[2604]: I0912 23:59:03.590169 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-vfdq2" podStartSLOduration=20.133990141 podStartE2EDuration="25.590147119s" podCreationTimestamp="2025-09-12 23:58:38 +0000 UTC" firstStartedPulling="2025-09-12 23:58:57.847329942 +0000 UTC m=+44.839016191" lastFinishedPulling="2025-09-12 23:59:03.30348692 +0000 UTC m=+50.295173169" observedRunningTime="2025-09-12 23:59:03.586544339 +0000 UTC m=+50.578230588" watchObservedRunningTime="2025-09-12 23:59:03.590147119 +0000 UTC m=+50.581833368" Sep 12 23:59:04.742302 containerd[1466]: time="2025-09-12T23:59:04.742221599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:04.743804 containerd[1466]: time="2025-09-12T23:59:04.743456755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 23:59:04.745842 containerd[1466]: time="2025-09-12T23:59:04.744697990Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:04.747530 containerd[1466]: time="2025-09-12T23:59:04.747486500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:04.748421 containerd[1466]: time="2025-09-12T23:59:04.748382296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.443134705s" Sep 12 23:59:04.748547 containerd[1466]: time="2025-09-12T23:59:04.748530136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 23:59:04.751071 containerd[1466]: time="2025-09-12T23:59:04.751016767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:59:04.758811 containerd[1466]: time="2025-09-12T23:59:04.758760658Z" level=info msg="CreateContainer within sandbox \"33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 23:59:04.782770 containerd[1466]: time="2025-09-12T23:59:04.782585409Z" level=info msg="CreateContainer within sandbox \"33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"91046d1b67a291ca957200bc4cf0a2b4a59b49dc22782b7ead555ad762a70e25\"" Sep 12 23:59:04.785917 containerd[1466]: time="2025-09-12T23:59:04.784292363Z" level=info msg="StartContainer for \"91046d1b67a291ca957200bc4cf0a2b4a59b49dc22782b7ead555ad762a70e25\"" Sep 12 23:59:04.875992 systemd[1]: Started cri-containerd-91046d1b67a291ca957200bc4cf0a2b4a59b49dc22782b7ead555ad762a70e25.scope - libcontainer container 91046d1b67a291ca957200bc4cf0a2b4a59b49dc22782b7ead555ad762a70e25. Sep 12 23:59:04.911401 containerd[1466]: time="2025-09-12T23:59:04.911296972Z" level=info msg="StartContainer for \"91046d1b67a291ca957200bc4cf0a2b4a59b49dc22782b7ead555ad762a70e25\" returns successfully" Sep 12 23:59:05.617449 systemd[1]: run-containerd-runc-k8s.io-91046d1b67a291ca957200bc4cf0a2b4a59b49dc22782b7ead555ad762a70e25-runc.ZL7np0.mount: Deactivated successfully. Sep 12 23:59:06.707046 containerd[1466]: time="2025-09-12T23:59:06.705443108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:06.707046 containerd[1466]: time="2025-09-12T23:59:06.706772163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 23:59:06.707046 containerd[1466]: time="2025-09-12T23:59:06.706952045Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:06.710488 containerd[1466]: time="2025-09-12T23:59:06.710427924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:06.714022 containerd[1466]: time="2025-09-12T23:59:06.713946683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.962879037s" Sep 12 23:59:06.714022 containerd[1466]: time="2025-09-12T23:59:06.714008644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 23:59:06.720092 containerd[1466]: time="2025-09-12T23:59:06.720044431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:59:06.723294 containerd[1466]: time="2025-09-12T23:59:06.723244867Z" level=info msg="CreateContainer within sandbox \"0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:59:06.754926 containerd[1466]: time="2025-09-12T23:59:06.754840659Z" level=info msg="CreateContainer within sandbox \"0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3169f620ac4717f0694b85237305dedb3c186926074405e7c5305eddf952c0c5\"" Sep 12 23:59:06.757334 containerd[1466]: time="2025-09-12T23:59:06.755432306Z" level=info msg="StartContainer for \"3169f620ac4717f0694b85237305dedb3c186926074405e7c5305eddf952c0c5\"" Sep 12 23:59:06.815827 
systemd[1]: Started cri-containerd-3169f620ac4717f0694b85237305dedb3c186926074405e7c5305eddf952c0c5.scope - libcontainer container 3169f620ac4717f0694b85237305dedb3c186926074405e7c5305eddf952c0c5. Sep 12 23:59:06.857508 containerd[1466]: time="2025-09-12T23:59:06.857059160Z" level=info msg="StartContainer for \"3169f620ac4717f0694b85237305dedb3c186926074405e7c5305eddf952c0c5\" returns successfully" Sep 12 23:59:07.120533 containerd[1466]: time="2025-09-12T23:59:07.120124163Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:07.122189 containerd[1466]: time="2025-09-12T23:59:07.121928502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 23:59:07.125583 containerd[1466]: time="2025-09-12T23:59:07.125465980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 405.371428ms" Sep 12 23:59:07.125583 containerd[1466]: time="2025-09-12T23:59:07.125517620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 23:59:07.130244 containerd[1466]: time="2025-09-12T23:59:07.129817626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 23:59:07.139596 containerd[1466]: time="2025-09-12T23:59:07.139407369Z" level=info msg="CreateContainer within sandbox \"7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:59:07.162508 containerd[1466]: time="2025-09-12T23:59:07.161626607Z" level=info msg="CreateContainer within sandbox \"7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ee8baf515324217869044a8915b71dcdbaab2fc4c9f6762686b1f8d257a6d071\"" Sep 12 23:59:07.168951 containerd[1466]: time="2025-09-12T23:59:07.168706563Z" level=info msg="StartContainer for \"ee8baf515324217869044a8915b71dcdbaab2fc4c9f6762686b1f8d257a6d071\"" Sep 12 23:59:07.213182 systemd[1]: Started cri-containerd-ee8baf515324217869044a8915b71dcdbaab2fc4c9f6762686b1f8d257a6d071.scope - libcontainer container ee8baf515324217869044a8915b71dcdbaab2fc4c9f6762686b1f8d257a6d071. 
Sep 12 23:59:07.280744 containerd[1466]: time="2025-09-12T23:59:07.280665523Z" level=info msg="StartContainer for \"ee8baf515324217869044a8915b71dcdbaab2fc4c9f6762686b1f8d257a6d071\" returns successfully" Sep 12 23:59:07.608655 kubelet[2604]: I0912 23:59:07.608563 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bb8b78f84-b4ll8" podStartSLOduration=29.943974537 podStartE2EDuration="36.608545915s" podCreationTimestamp="2025-09-12 23:58:31 +0000 UTC" firstStartedPulling="2025-09-12 23:59:00.053338989 +0000 UTC m=+47.045025198" lastFinishedPulling="2025-09-12 23:59:06.717910327 +0000 UTC m=+53.709596576" observedRunningTime="2025-09-12 23:59:07.607408583 +0000 UTC m=+54.599094832" watchObservedRunningTime="2025-09-12 23:59:07.608545915 +0000 UTC m=+54.600232164" Sep 12 23:59:08.600550 kubelet[2604]: I0912 23:59:08.600474 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:59:08.789911 containerd[1466]: time="2025-09-12T23:59:08.789840911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:08.792149 containerd[1466]: time="2025-09-12T23:59:08.792091814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 23:59:08.793549 containerd[1466]: time="2025-09-12T23:59:08.793447948Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:08.797464 containerd[1466]: time="2025-09-12T23:59:08.797400949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:08.799069 containerd[1466]: time="2025-09-12T23:59:08.799001725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.669005497s" Sep 12 23:59:08.799069 containerd[1466]: time="2025-09-12T23:59:08.799062166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 12 23:59:08.807733 containerd[1466]: time="2025-09-12T23:59:08.807677974Z" level=info msg="CreateContainer within sandbox \"33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 23:59:08.829257 containerd[1466]: time="2025-09-12T23:59:08.829204716Z" level=info msg="CreateContainer within sandbox \"33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"258c5d78a9fb6393d843bd576c797c261390c37be7792b24f2d85b9af043d8c6\"" Sep 12 23:59:08.831021 containerd[1466]: time="2025-09-12T23:59:08.830971254Z" level=info msg="StartContainer for \"258c5d78a9fb6393d843bd576c797c261390c37be7792b24f2d85b9af043d8c6\"" Sep 12 23:59:08.898195 
systemd[1]: Started cri-containerd-258c5d78a9fb6393d843bd576c797c261390c37be7792b24f2d85b9af043d8c6.scope - libcontainer container 258c5d78a9fb6393d843bd576c797c261390c37be7792b24f2d85b9af043d8c6. Sep 12 23:59:08.961707 containerd[1466]: time="2025-09-12T23:59:08.961652718Z" level=info msg="StartContainer for \"258c5d78a9fb6393d843bd576c797c261390c37be7792b24f2d85b9af043d8c6\" returns successfully" Sep 12 23:59:09.293365 kubelet[2604]: I0912 23:59:09.293317 2604 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 23:59:09.299141 kubelet[2604]: I0912 23:59:09.299077 2604 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 23:59:09.607771 kubelet[2604]: I0912 23:59:09.607308 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:59:09.607908 kubelet[2604]: I0912 23:59:09.607772 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:59:09.623207 kubelet[2604]: I0912 23:59:09.623130 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bb8b78f84-8mxsb" podStartSLOduration=32.37485659 podStartE2EDuration="38.623096619s" podCreationTimestamp="2025-09-12 23:58:31 +0000 UTC" firstStartedPulling="2025-09-12 23:59:00.879528135 +0000 UTC m=+47.871214384" lastFinishedPulling="2025-09-12 23:59:07.127768164 +0000 UTC m=+54.119454413" observedRunningTime="2025-09-12 23:59:07.631508601 +0000 UTC m=+54.623194930" watchObservedRunningTime="2025-09-12 23:59:09.623096619 +0000 UTC m=+56.614782868" Sep 12 23:59:09.623485 kubelet[2604]: I0912 23:59:09.623455 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fd2gt" podStartSLOduration=22.563525606 podStartE2EDuration="32.623449223s" podCreationTimestamp="2025-09-12 23:58:37 +0000 UTC" firstStartedPulling="2025-09-12 23:58:58.739985398 +0000 UTC m=+45.731671647" lastFinishedPulling="2025-09-12 23:59:08.799909015 +0000 UTC m=+55.791595264" observedRunningTime="2025-09-12 23:59:09.623302382 +0000 UTC m=+56.614988631" watchObservedRunningTime="2025-09-12 23:59:09.623449223 +0000 UTC m=+56.615135472" Sep 12 23:59:13.164074 containerd[1466]: time="2025-09-12T23:59:13.163994004Z" level=info msg="StopPodSandbox for \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\"" Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.227 [WARNING][5334] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0", GenerateName:"calico-apiserver-bb8b78f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f365313-855d-43b4-a047-74e3275b3496", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bb8b78f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec", Pod:"calico-apiserver-bb8b78f84-8mxsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3f81b6d1e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.227 [INFO][5334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.227 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" iface="eth0" netns="" Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.227 [INFO][5334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.227 [INFO][5334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.308 [INFO][5344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" HandleID="k8s-pod-network.a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.308 [INFO][5344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.308 [INFO][5344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.329 [WARNING][5344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" HandleID="k8s-pod-network.a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.329 [INFO][5344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" HandleID="k8s-pod-network.a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.345 [INFO][5344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:13.354029 containerd[1466]: 2025-09-12 23:59:13.350 [INFO][5334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:13.354029 containerd[1466]: time="2025-09-12T23:59:13.353858864Z" level=info msg="TearDown network for sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\" successfully" Sep 12 23:59:13.354029 containerd[1466]: time="2025-09-12T23:59:13.353906105Z" level=info msg="StopPodSandbox for \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\" returns successfully" Sep 12 23:59:13.354668 containerd[1466]: time="2025-09-12T23:59:13.354638711Z" level=info msg="RemovePodSandbox for \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\"" Sep 12 23:59:13.354696 containerd[1466]: time="2025-09-12T23:59:13.354683111Z" level=info msg="Forcibly stopping sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\"" Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.433 [WARNING][5358] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0", GenerateName:"calico-apiserver-bb8b78f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f365313-855d-43b4-a047-74e3275b3496", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bb8b78f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"7c8c2d2fe53e89c2ce34a348cbc8581d9a0efe8c39e0889e5f29a7077324cbec", Pod:"calico-apiserver-bb8b78f84-8mxsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3f81b6d1e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.434 [INFO][5358] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.434 [INFO][5358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" iface="eth0" netns="" Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.434 [INFO][5358] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.434 [INFO][5358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.484 [INFO][5366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" HandleID="k8s-pod-network.a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.484 [INFO][5366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.484 [INFO][5366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.502 [WARNING][5366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" HandleID="k8s-pod-network.a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.503 [INFO][5366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" HandleID="k8s-pod-network.a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--8mxsb-eth0" Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.510 [INFO][5366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:13.516717 containerd[1466]: 2025-09-12 23:59:13.512 [INFO][5358] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a" Sep 12 23:59:13.517845 containerd[1466]: time="2025-09-12T23:59:13.517247984Z" level=info msg="TearDown network for sandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\" successfully" Sep 12 23:59:13.523580 containerd[1466]: time="2025-09-12T23:59:13.523310754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:59:13.523580 containerd[1466]: time="2025-09-12T23:59:13.523406635Z" level=info msg="RemovePodSandbox \"a6014a260f802e3548d829e437c6a4fab717333236a9c16ced041fc07c51671a\" returns successfully" Sep 12 23:59:13.527159 containerd[1466]: time="2025-09-12T23:59:13.527085626Z" level=info msg="StopPodSandbox for \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\"" Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.584 [WARNING][5384] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0", GenerateName:"calico-apiserver-bb8b78f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"2850e962-1009-4f88-9096-205a9a322adb", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bb8b78f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab", Pod:"calico-apiserver-bb8b78f84-b4ll8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22f43121028", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.584 [INFO][5384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.584 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" iface="eth0" netns="" Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.584 [INFO][5384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.584 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.606 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" HandleID="k8s-pod-network.b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.606 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.606 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.622 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" HandleID="k8s-pod-network.b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.623 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" HandleID="k8s-pod-network.b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.625 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:13.630112 containerd[1466]: 2025-09-12 23:59:13.628 [INFO][5384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:59:13.631095 containerd[1466]: time="2025-09-12T23:59:13.630229084Z" level=info msg="TearDown network for sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\" successfully" Sep 12 23:59:13.631095 containerd[1466]: time="2025-09-12T23:59:13.630260724Z" level=info msg="StopPodSandbox for \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\" returns successfully" Sep 12 23:59:13.631095 containerd[1466]: time="2025-09-12T23:59:13.630857009Z" level=info msg="RemovePodSandbox for \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\"" Sep 12 23:59:13.631095 containerd[1466]: time="2025-09-12T23:59:13.630903610Z" level=info msg="Forcibly stopping sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\"" Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.674 [WARNING][5406] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0", GenerateName:"calico-apiserver-bb8b78f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"2850e962-1009-4f88-9096-205a9a322adb", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bb8b78f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"0e66c123c64eeefced7f5822561db535ab1cf2ca40b69d3d0fd1ab64d408caab", Pod:"calico-apiserver-bb8b78f84-b4ll8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22f43121028", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.675 [INFO][5406] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.675 [INFO][5406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" iface="eth0" netns="" Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.675 [INFO][5406] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.675 [INFO][5406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.697 [INFO][5413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" HandleID="k8s-pod-network.b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.697 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.697 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.712 [WARNING][5413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" HandleID="k8s-pod-network.b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.713 [INFO][5413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" HandleID="k8s-pod-network.b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Workload="ci--4081--3--5--n--f526684106-k8s-calico--apiserver--bb8b78f84--b4ll8-eth0" Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.718 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:13.726042 containerd[1466]: 2025-09-12 23:59:13.721 [INFO][5406] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785" Sep 12 23:59:13.726042 containerd[1466]: time="2025-09-12T23:59:13.724789151Z" level=info msg="TearDown network for sandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\" successfully" Sep 12 23:59:13.755545 containerd[1466]: time="2025-09-12T23:59:13.755484606Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:59:13.755747 containerd[1466]: time="2025-09-12T23:59:13.755685848Z" level=info msg="RemovePodSandbox \"b6f5ffc274c6578affb567b5db7cca04b69714a88f266376d4a3d99e884a3785\" returns successfully" Sep 12 23:59:13.758441 containerd[1466]: time="2025-09-12T23:59:13.758394911Z" level=info msg="StopPodSandbox for \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\"" Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.814 [WARNING][5427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"476a73de-395b-42d1-aef0-13cde14fd9c8", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431", Pod:"coredns-674b8bbfcf-4mjz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09df93d5943", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.814 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.814 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" iface="eth0" netns="" Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.814 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.814 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.849 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" HandleID="k8s-pod-network.0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.850 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.850 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.872 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" HandleID="k8s-pod-network.0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.872 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" HandleID="k8s-pod-network.0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.874 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:13.878119 containerd[1466]: 2025-09-12 23:59:13.876 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:59:13.878119 containerd[1466]: time="2025-09-12T23:59:13.877924665Z" level=info msg="TearDown network for sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\" successfully" Sep 12 23:59:13.878119 containerd[1466]: time="2025-09-12T23:59:13.877962946Z" level=info msg="StopPodSandbox for \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\" returns successfully" Sep 12 23:59:13.880411 containerd[1466]: time="2025-09-12T23:59:13.879843041Z" level=info msg="RemovePodSandbox for \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\"" Sep 12 23:59:13.880411 containerd[1466]: time="2025-09-12T23:59:13.879923002Z" level=info msg="Forcibly stopping sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\"" Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.920 [WARNING][5449] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"476a73de-395b-42d1-aef0-13cde14fd9c8", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"4ee984f87831098ca97239685a5c801675fd31b6105d115754567e0a0c098431", Pod:"coredns-674b8bbfcf-4mjz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09df93d5943", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.920 [INFO][5449] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.920 [INFO][5449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" iface="eth0" netns="" Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.920 [INFO][5449] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.920 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.944 [INFO][5456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" HandleID="k8s-pod-network.0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.945 [INFO][5456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.945 [INFO][5456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.956 [WARNING][5456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" HandleID="k8s-pod-network.0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.957 [INFO][5456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" HandleID="k8s-pod-network.0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--4mjz4-eth0" Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.960 [INFO][5456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:13.964579 containerd[1466]: 2025-09-12 23:59:13.962 [INFO][5449] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6" Sep 12 23:59:13.965970 containerd[1466]: time="2025-09-12T23:59:13.964762148Z" level=info msg="TearDown network for sandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\" successfully" Sep 12 23:59:13.973302 containerd[1466]: time="2025-09-12T23:59:13.973248459Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:59:13.973450 containerd[1466]: time="2025-09-12T23:59:13.973338979Z" level=info msg="RemovePodSandbox \"0ee191dd854e8af6ebd95f8abbf38928905d755a49f160b42f50af042511a6a6\" returns successfully" Sep 12 23:59:13.974944 containerd[1466]: time="2025-09-12T23:59:13.974376588Z" level=info msg="StopPodSandbox for \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\"" Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.022 [WARNING][5470] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c4685c7-dc20-48ea-a187-79c68ef78d4c", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92", Pod:"csi-node-driver-fd2gt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d8e109accf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.023 [INFO][5470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.023 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" iface="eth0" netns="" Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.023 [INFO][5470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.023 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.057 [INFO][5478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" HandleID="k8s-pod-network.664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.057 [INFO][5478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.057 [INFO][5478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.074 [WARNING][5478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" HandleID="k8s-pod-network.664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.074 [INFO][5478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" HandleID="k8s-pod-network.664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.076 [INFO][5478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:14.082501 containerd[1466]: 2025-09-12 23:59:14.080 [INFO][5470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:59:14.083985 containerd[1466]: time="2025-09-12T23:59:14.083548827Z" level=info msg="TearDown network for sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\" successfully" Sep 12 23:59:14.083985 containerd[1466]: time="2025-09-12T23:59:14.083657028Z" level=info msg="StopPodSandbox for \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\" returns successfully" Sep 12 23:59:14.084845 containerd[1466]: time="2025-09-12T23:59:14.084491594Z" level=info msg="RemovePodSandbox for \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\"" Sep 12 23:59:14.084845 containerd[1466]: time="2025-09-12T23:59:14.084527395Z" level=info msg="Forcibly stopping sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\"" Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.128 [WARNING][5492] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c4685c7-dc20-48ea-a187-79c68ef78d4c", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"33b211c7c89b623886d0e5079d483d0a0368ccd281180719efd65ec450967f92", Pod:"csi-node-driver-fd2gt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d8e109accf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.129 [INFO][5492] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.129 [INFO][5492] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" iface="eth0" netns="" Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.129 [INFO][5492] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.129 [INFO][5492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.153 [INFO][5499] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" HandleID="k8s-pod-network.664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.153 [INFO][5499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.153 [INFO][5499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.165 [WARNING][5499] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" HandleID="k8s-pod-network.664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.165 [INFO][5499] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" HandleID="k8s-pod-network.664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Workload="ci--4081--3--5--n--f526684106-k8s-csi--node--driver--fd2gt-eth0" Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.168 [INFO][5499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:14.171799 containerd[1466]: 2025-09-12 23:59:14.170 [INFO][5492] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a" Sep 12 23:59:14.171799 containerd[1466]: time="2025-09-12T23:59:14.171752369Z" level=info msg="TearDown network for sandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\" successfully" Sep 12 23:59:14.178510 containerd[1466]: time="2025-09-12T23:59:14.178273821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:59:14.178510 containerd[1466]: time="2025-09-12T23:59:14.178377902Z" level=info msg="RemovePodSandbox \"664db265337e2d283afa49318ca859bf0485c9ccbf29e09ce2fe290de120273a\" returns successfully" Sep 12 23:59:14.179720 containerd[1466]: time="2025-09-12T23:59:14.179346990Z" level=info msg="StopPodSandbox for \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\"" Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.223 [WARNING][5514] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0", GenerateName:"calico-kube-controllers-68cf785d8c-", Namespace:"calico-system", SelfLink:"", UID:"7464ec4d-b10e-40dc-b6fa-a3b3579b9606", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68cf785d8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011", Pod:"calico-kube-controllers-68cf785d8c-6sttq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9320638a2fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.224 [INFO][5514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.225 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" iface="eth0" netns="" Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.225 [INFO][5514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.225 [INFO][5514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.251 [INFO][5521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" HandleID="k8s-pod-network.f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.251 [INFO][5521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.252 [INFO][5521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.269 [WARNING][5521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" HandleID="k8s-pod-network.f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.269 [INFO][5521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" HandleID="k8s-pod-network.f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.271 [INFO][5521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:14.275036 containerd[1466]: 2025-09-12 23:59:14.272 [INFO][5514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:59:14.275785 containerd[1466]: time="2025-09-12T23:59:14.275572116Z" level=info msg="TearDown network for sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\" successfully" Sep 12 23:59:14.275785 containerd[1466]: time="2025-09-12T23:59:14.275664837Z" level=info msg="StopPodSandbox for \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\" returns successfully" Sep 12 23:59:14.276688 containerd[1466]: time="2025-09-12T23:59:14.276547204Z" level=info msg="RemovePodSandbox for \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\"" Sep 12 23:59:14.276688 containerd[1466]: time="2025-09-12T23:59:14.276595165Z" level=info msg="Forcibly stopping sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\"" Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.317 [WARNING][5535] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0", GenerateName:"calico-kube-controllers-68cf785d8c-", Namespace:"calico-system", SelfLink:"", UID:"7464ec4d-b10e-40dc-b6fa-a3b3579b9606", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68cf785d8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"34f6da326cf5934093b87e246268771d8dd9b3d5950a981b9f3ef5e84ccb0011", Pod:"calico-kube-controllers-68cf785d8c-6sttq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9320638a2fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.317 [INFO][5535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.317 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" iface="eth0" netns="" Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.317 [INFO][5535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.317 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.340 [INFO][5542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" HandleID="k8s-pod-network.f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.340 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.340 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.353 [WARNING][5542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" HandleID="k8s-pod-network.f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.353 [INFO][5542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" HandleID="k8s-pod-network.f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Workload="ci--4081--3--5--n--f526684106-k8s-calico--kube--controllers--68cf785d8c--6sttq-eth0" Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.355 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:14.359675 containerd[1466]: 2025-09-12 23:59:14.357 [INFO][5535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6" Sep 12 23:59:14.360969 containerd[1466]: time="2025-09-12T23:59:14.360344552Z" level=info msg="TearDown network for sandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\" successfully" Sep 12 23:59:14.371264 containerd[1466]: time="2025-09-12T23:59:14.371060197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:59:14.371264 containerd[1466]: time="2025-09-12T23:59:14.371148398Z" level=info msg="RemovePodSandbox \"f4af03d47ec288cd29c6474df520c02e21eb51da9665904b96adcff418aab6c6\" returns successfully" Sep 12 23:59:14.372404 containerd[1466]: time="2025-09-12T23:59:14.371994605Z" level=info msg="StopPodSandbox for \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\"" Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.423 [WARNING][5556] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dfd82b01-39de-4359-8408-91804a116783", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af", Pod:"coredns-674b8bbfcf-lp4ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali223b0e3c828", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.424 [INFO][5556] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.424 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" iface="eth0" netns="" Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.424 [INFO][5556] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.424 [INFO][5556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.452 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" HandleID="k8s-pod-network.4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.452 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.452 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.477 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" HandleID="k8s-pod-network.4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.477 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" HandleID="k8s-pod-network.4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.480 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:14.484929 containerd[1466]: 2025-09-12 23:59:14.482 [INFO][5556] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:59:14.484929 containerd[1466]: time="2025-09-12T23:59:14.484822863Z" level=info msg="TearDown network for sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\" successfully" Sep 12 23:59:14.484929 containerd[1466]: time="2025-09-12T23:59:14.484857104Z" level=info msg="StopPodSandbox for \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\" returns successfully" Sep 12 23:59:14.486304 containerd[1466]: time="2025-09-12T23:59:14.486247075Z" level=info msg="RemovePodSandbox for \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\"" Sep 12 23:59:14.486304 containerd[1466]: time="2025-09-12T23:59:14.486302875Z" level=info msg="Forcibly stopping sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\"" Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.536 [WARNING][5577] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dfd82b01-39de-4359-8408-91804a116783", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"a1d4a2306fcd1341641a3f832a563040364540d9a44f433dd3ce0e6746e5f2af", Pod:"coredns-674b8bbfcf-lp4ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali223b0e3c828", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.537 [INFO][5577] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.537 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" iface="eth0" netns="" Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.537 [INFO][5577] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.537 [INFO][5577] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.563 [INFO][5585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" HandleID="k8s-pod-network.4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.563 [INFO][5585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.563 [INFO][5585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.575 [WARNING][5585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" HandleID="k8s-pod-network.4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.575 [INFO][5585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" HandleID="k8s-pod-network.4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Workload="ci--4081--3--5--n--f526684106-k8s-coredns--674b8bbfcf--lp4ss-eth0" Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.577 [INFO][5585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:14.581669 containerd[1466]: 2025-09-12 23:59:14.579 [INFO][5577] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f" Sep 12 23:59:14.582229 containerd[1466]: time="2025-09-12T23:59:14.581730395Z" level=info msg="TearDown network for sandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\" successfully" Sep 12 23:59:14.586447 containerd[1466]: time="2025-09-12T23:59:14.586372872Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:59:14.586658 containerd[1466]: time="2025-09-12T23:59:14.586504313Z" level=info msg="RemovePodSandbox \"4d57119e73b99239c44c0fe0e325bfbf0a36ec631c8d80569c067a787871676f\" returns successfully" Sep 12 23:59:14.587269 containerd[1466]: time="2025-09-12T23:59:14.587212279Z" level=info msg="StopPodSandbox for \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\"" Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.627 [WARNING][5599] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.627 [INFO][5599] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.627 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" iface="eth0" netns="" Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.627 [INFO][5599] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.627 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.654 [INFO][5607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" HandleID="k8s-pod-network.3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.654 [INFO][5607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.655 [INFO][5607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.665 [WARNING][5607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" HandleID="k8s-pod-network.3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.665 [INFO][5607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" HandleID="k8s-pod-network.3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.668 [INFO][5607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:14.671552 containerd[1466]: 2025-09-12 23:59:14.669 [INFO][5599] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:59:14.672280 containerd[1466]: time="2025-09-12T23:59:14.671626911Z" level=info msg="TearDown network for sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\" successfully" Sep 12 23:59:14.672280 containerd[1466]: time="2025-09-12T23:59:14.671680232Z" level=info msg="StopPodSandbox for \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\" returns successfully" Sep 12 23:59:14.672340 containerd[1466]: time="2025-09-12T23:59:14.672315797Z" level=info msg="RemovePodSandbox for \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\"" Sep 12 23:59:14.672369 containerd[1466]: time="2025-09-12T23:59:14.672357877Z" level=info msg="Forcibly stopping sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\"" Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.713 [WARNING][5621] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" WorkloadEndpoint="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.713 [INFO][5621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.713 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" iface="eth0" netns="" Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.713 [INFO][5621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.713 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.738 [INFO][5628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" HandleID="k8s-pod-network.3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.738 [INFO][5628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.738 [INFO][5628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.751 [WARNING][5628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" HandleID="k8s-pod-network.3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.751 [INFO][5628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" HandleID="k8s-pod-network.3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Workload="ci--4081--3--5--n--f526684106-k8s-whisker--674b8c84d8--crn2g-eth0" Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.753 [INFO][5628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:14.759124 containerd[1466]: 2025-09-12 23:59:14.756 [INFO][5621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1" Sep 12 23:59:14.760176 containerd[1466]: time="2025-09-12T23:59:14.759097848Z" level=info msg="TearDown network for sandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\" successfully" Sep 12 23:59:14.767425 containerd[1466]: time="2025-09-12T23:59:14.767197393Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:59:14.767425 containerd[1466]: time="2025-09-12T23:59:14.767292753Z" level=info msg="RemovePodSandbox \"3d84112682607df10cbea40764147a42b69ce79b1b89844cdb8dd39586d249a1\" returns successfully" Sep 12 23:59:14.768436 containerd[1466]: time="2025-09-12T23:59:14.768146080Z" level=info msg="StopPodSandbox for \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\"" Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.817 [WARNING][5642] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ea8dd6bf-8417-4740-90cb-3ee84da12ecc", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb", Pod:"goldmane-54d579b49d-vfdq2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0df6ca3983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.818 [INFO][5642] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.818 [INFO][5642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" iface="eth0" netns="" Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.818 [INFO][5642] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.818 [INFO][5642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.841 [INFO][5649] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" HandleID="k8s-pod-network.adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.842 [INFO][5649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.842 [INFO][5649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.855 [WARNING][5649] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" HandleID="k8s-pod-network.adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.855 [INFO][5649] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" HandleID="k8s-pod-network.adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.857 [INFO][5649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:14.860992 containerd[1466]: 2025-09-12 23:59:14.858 [INFO][5642] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:59:14.860992 containerd[1466]: time="2025-09-12T23:59:14.860758418Z" level=info msg="TearDown network for sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\" successfully" Sep 12 23:59:14.860992 containerd[1466]: time="2025-09-12T23:59:14.860791778Z" level=info msg="StopPodSandbox for \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\" returns successfully" Sep 12 23:59:14.862497 containerd[1466]: time="2025-09-12T23:59:14.862046348Z" level=info msg="RemovePodSandbox for \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\"" Sep 12 23:59:14.862497 containerd[1466]: time="2025-09-12T23:59:14.862131349Z" level=info msg="Forcibly stopping sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\"" Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.902 [WARNING][5663] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ea8dd6bf-8417-4740-90cb-3ee84da12ecc", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-f526684106", ContainerID:"14558ed8f462ec8ff6341eef68ce4be3051812fda51308d054f57d03ee431ffb", Pod:"goldmane-54d579b49d-vfdq2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0df6ca3983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.903 [INFO][5663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.903 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" iface="eth0" netns="" Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.903 [INFO][5663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.904 [INFO][5663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.932 [INFO][5670] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" HandleID="k8s-pod-network.adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.932 [INFO][5670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.932 [INFO][5670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.950 [WARNING][5670] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" HandleID="k8s-pod-network.adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.950 [INFO][5670] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" HandleID="k8s-pod-network.adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Workload="ci--4081--3--5--n--f526684106-k8s-goldmane--54d579b49d--vfdq2-eth0" Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.952 [INFO][5670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:59:14.957651 containerd[1466]: 2025-09-12 23:59:14.954 [INFO][5663] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e" Sep 12 23:59:14.957651 containerd[1466]: time="2025-09-12T23:59:14.956225618Z" level=info msg="TearDown network for sandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\" successfully" Sep 12 23:59:14.960297 containerd[1466]: time="2025-09-12T23:59:14.960217930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:59:14.960428 containerd[1466]: time="2025-09-12T23:59:14.960336051Z" level=info msg="RemovePodSandbox \"adf97ca50b870a7901f492571bd4aab10b17735946b9f2020c5872cade876e4e\" returns successfully" Sep 13 00:00:01.579613 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Sep 13 00:00:01.612298 systemd[1]: logrotate.service: Deactivated successfully. Sep 13 00:00:31.577297 systemd[1]: run-containerd-runc-k8s.io-e473a1c1e83c53d07badd1b11349eaf3e60cab265e6f625daab843b1fb9c86ff-runc.zqQBJS.mount: Deactivated successfully. Sep 13 00:00:35.413253 systemd[1]: run-containerd-runc-k8s.io-e473a1c1e83c53d07badd1b11349eaf3e60cab265e6f625daab843b1fb9c86ff-runc.UwroaS.mount: Deactivated successfully. Sep 13 00:00:50.853165 systemd[1]: Started sshd@7-128.140.85.90:22-147.75.109.163:35296.service - OpenSSH per-connection server daemon (147.75.109.163:35296). Sep 13 00:00:51.851327 sshd[5997]: Accepted publickey for core from 147.75.109.163 port 35296 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:00:51.854437 sshd[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:51.860271 systemd-logind[1456]: New session 8 of user core. Sep 13 00:00:51.865953 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:00:52.627708 sshd[5997]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:52.633421 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:00:52.634478 systemd[1]: sshd@7-128.140.85.90:22-147.75.109.163:35296.service: Deactivated successfully. Sep 13 00:00:52.637373 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:00:52.639019 systemd-logind[1456]: Removed session 8. Sep 13 00:00:57.824036 systemd[1]: Started sshd@8-128.140.85.90:22-147.75.109.163:35306.service - OpenSSH per-connection server daemon (147.75.109.163:35306). 
Sep 13 00:00:58.882662 sshd[6038]: Accepted publickey for core from 147.75.109.163 port 35306 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:00:58.884949 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:58.891097 systemd-logind[1456]: New session 9 of user core. Sep 13 00:00:58.894871 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:00:59.689477 sshd[6038]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:59.694454 systemd[1]: sshd@8-128.140.85.90:22-147.75.109.163:35306.service: Deactivated successfully. Sep 13 00:00:59.701560 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:00:59.710062 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:00:59.712250 systemd-logind[1456]: Removed session 9. Sep 13 00:01:01.580519 systemd[1]: run-containerd-runc-k8s.io-e473a1c1e83c53d07badd1b11349eaf3e60cab265e6f625daab843b1fb9c86ff-runc.T4leQy.mount: Deactivated successfully. Sep 13 00:01:04.866914 systemd[1]: Started sshd@9-128.140.85.90:22-147.75.109.163:45508.service - OpenSSH per-connection server daemon (147.75.109.163:45508). Sep 13 00:01:05.872912 sshd[6093]: Accepted publickey for core from 147.75.109.163 port 45508 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:05.876469 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:05.881994 systemd-logind[1456]: New session 10 of user core. Sep 13 00:01:05.891239 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:01:06.640505 sshd[6093]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:06.647413 systemd[1]: sshd@9-128.140.85.90:22-147.75.109.163:45508.service: Deactivated successfully. Sep 13 00:01:06.650632 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:01:06.651882 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:01:06.653529 systemd-logind[1456]: Removed session 10. Sep 13 00:01:11.816178 systemd[1]: Started sshd@10-128.140.85.90:22-147.75.109.163:37806.service - OpenSSH per-connection server daemon (147.75.109.163:37806). Sep 13 00:01:12.816169 sshd[6107]: Accepted publickey for core from 147.75.109.163 port 37806 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:12.818428 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:12.824961 systemd-logind[1456]: New session 11 of user core. Sep 13 00:01:12.829050 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:01:13.592389 sshd[6107]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:13.596709 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:01:13.598626 systemd[1]: sshd@10-128.140.85.90:22-147.75.109.163:37806.service: Deactivated successfully. Sep 13 00:01:13.601145 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:01:13.602552 systemd-logind[1456]: Removed session 11. Sep 13 00:01:18.771098 systemd[1]: Started sshd@11-128.140.85.90:22-147.75.109.163:37820.service - OpenSSH per-connection server daemon (147.75.109.163:37820). 
Sep 13 00:01:19.767736 sshd[6123]: Accepted publickey for core from 147.75.109.163 port 37820 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:19.769852 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:19.777875 systemd-logind[1456]: New session 12 of user core. Sep 13 00:01:19.784906 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:01:20.536571 sshd[6123]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:20.541536 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:01:20.542199 systemd[1]: sshd@11-128.140.85.90:22-147.75.109.163:37820.service: Deactivated successfully. Sep 13 00:01:20.545434 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:01:20.549369 systemd-logind[1456]: Removed session 12. Sep 13 00:01:20.706541 systemd[1]: Started sshd@12-128.140.85.90:22-147.75.109.163:53916.service - OpenSSH per-connection server daemon (147.75.109.163:53916). Sep 13 00:01:21.682847 sshd[6136]: Accepted publickey for core from 147.75.109.163 port 53916 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:21.685291 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:21.693031 systemd-logind[1456]: New session 13 of user core. Sep 13 00:01:21.698886 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:01:22.483479 sshd[6136]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:22.489348 systemd[1]: sshd@12-128.140.85.90:22-147.75.109.163:53916.service: Deactivated successfully. Sep 13 00:01:22.494542 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:01:22.501707 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:01:22.508826 systemd-logind[1456]: Removed session 13. Sep 13 00:01:22.669056 systemd[1]: Started sshd@13-128.140.85.90:22-147.75.109.163:53928.service - OpenSSH per-connection server daemon (147.75.109.163:53928). Sep 13 00:01:23.672360 sshd[6170]: Accepted publickey for core from 147.75.109.163 port 53928 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:23.674960 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:23.681588 systemd-logind[1456]: New session 14 of user core. Sep 13 00:01:23.687873 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:01:24.461989 sshd[6170]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:24.466538 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:01:24.467273 systemd[1]: sshd@13-128.140.85.90:22-147.75.109.163:53928.service: Deactivated successfully. Sep 13 00:01:24.471468 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:01:24.479109 systemd-logind[1456]: Removed session 14. Sep 13 00:01:29.643455 systemd[1]: Started sshd@14-128.140.85.90:22-147.75.109.163:53942.service - OpenSSH per-connection server daemon (147.75.109.163:53942). Sep 13 00:01:30.630472 sshd[6188]: Accepted publickey for core from 147.75.109.163 port 53942 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:30.632594 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:30.638982 systemd-logind[1456]: New session 15 of user core. Sep 13 00:01:30.644889 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 13 00:01:31.402431 sshd[6188]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:31.409036 systemd[1]: sshd@14-128.140.85.90:22-147.75.109.163:53942.service: Deactivated successfully. Sep 13 00:01:31.413115 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:01:31.415753 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:01:31.418328 systemd-logind[1456]: Removed session 15. Sep 13 00:01:36.570122 systemd[1]: Started sshd@15-128.140.85.90:22-147.75.109.163:52088.service - OpenSSH per-connection server daemon (147.75.109.163:52088). Sep 13 00:01:37.544651 sshd[6265]: Accepted publickey for core from 147.75.109.163 port 52088 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:37.547214 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:37.552516 systemd-logind[1456]: New session 16 of user core. Sep 13 00:01:37.558802 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:01:38.296973 sshd[6265]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:38.303843 systemd[1]: sshd@15-128.140.85.90:22-147.75.109.163:52088.service: Deactivated successfully. Sep 13 00:01:38.307085 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:01:38.308897 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:01:38.310857 systemd-logind[1456]: Removed session 16. Sep 13 00:01:43.476121 systemd[1]: Started sshd@16-128.140.85.90:22-147.75.109.163:49244.service - OpenSSH per-connection server daemon (147.75.109.163:49244). Sep 13 00:01:44.453998 sshd[6280]: Accepted publickey for core from 147.75.109.163 port 49244 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:44.456305 sshd[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:44.463249 systemd-logind[1456]: New session 17 of user core. Sep 13 00:01:44.467916 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:01:45.210237 sshd[6280]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:45.215090 systemd[1]: sshd@16-128.140.85.90:22-147.75.109.163:49244.service: Deactivated successfully. Sep 13 00:01:45.219138 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:01:45.220994 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:01:45.222418 systemd-logind[1456]: Removed session 17. Sep 13 00:01:45.387947 systemd[1]: Started sshd@17-128.140.85.90:22-147.75.109.163:49254.service - OpenSSH per-connection server daemon (147.75.109.163:49254). Sep 13 00:01:46.386571 sshd[6312]: Accepted publickey for core from 147.75.109.163 port 49254 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:46.388883 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:46.394541 systemd-logind[1456]: New session 18 of user core. Sep 13 00:01:46.400910 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:01:47.315776 sshd[6312]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:47.320229 systemd[1]: sshd@17-128.140.85.90:22-147.75.109.163:49254.service: Deactivated successfully. Sep 13 00:01:47.324762 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:01:47.327160 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:01:47.328802 systemd-logind[1456]: Removed session 18. 
Sep 13 00:01:47.487041 systemd[1]: Started sshd@18-128.140.85.90:22-147.75.109.163:49266.service - OpenSSH per-connection server daemon (147.75.109.163:49266). Sep 13 00:01:48.477563 sshd[6323]: Accepted publickey for core from 147.75.109.163 port 49266 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:48.480808 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:48.491440 systemd-logind[1456]: New session 19 of user core. Sep 13 00:01:48.503941 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:01:49.850334 sshd[6323]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:49.855981 systemd[1]: sshd@18-128.140.85.90:22-147.75.109.163:49266.service: Deactivated successfully. Sep 13 00:01:49.859475 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:01:49.864259 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:01:49.866953 systemd-logind[1456]: Removed session 19. Sep 13 00:01:50.022046 systemd[1]: Started sshd@19-128.140.85.90:22-147.75.109.163:49282.service - OpenSSH per-connection server daemon (147.75.109.163:49282). Sep 13 00:01:50.995404 sshd[6341]: Accepted publickey for core from 147.75.109.163 port 49282 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:50.998100 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:51.005154 systemd-logind[1456]: New session 20 of user core. Sep 13 00:01:51.018104 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:01:51.904504 sshd[6341]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:51.911259 systemd[1]: sshd@19-128.140.85.90:22-147.75.109.163:49282.service: Deactivated successfully. Sep 13 00:01:51.915373 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:01:51.918477 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:01:51.920033 systemd-logind[1456]: Removed session 20. Sep 13 00:01:52.077048 systemd[1]: Started sshd@20-128.140.85.90:22-147.75.109.163:52000.service - OpenSSH per-connection server daemon (147.75.109.163:52000). Sep 13 00:01:52.493741 systemd[1]: run-containerd-runc-k8s.io-8533f0084c3a10c182eda34a2e5bc61b3c5c8613b0939556220b1d67edf31454-runc.4B1F7z.mount: Deactivated successfully. Sep 13 00:01:53.049121 sshd[6354]: Accepted publickey for core from 147.75.109.163 port 52000 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:53.050984 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:53.056621 systemd-logind[1456]: New session 21 of user core. Sep 13 00:01:53.057837 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:01:53.815563 sshd[6354]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:53.821367 systemd[1]: sshd@20-128.140.85.90:22-147.75.109.163:52000.service: Deactivated successfully. Sep 13 00:01:53.824476 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:01:53.826430 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:01:53.828857 systemd-logind[1456]: Removed session 21. Sep 13 00:01:58.992994 systemd[1]: Started sshd@21-128.140.85.90:22-147.75.109.163:52016.service - OpenSSH per-connection server daemon (147.75.109.163:52016). 
Sep 13 00:01:59.978844 sshd[6412]: Accepted publickey for core from 147.75.109.163 port 52016 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:01:59.981445 sshd[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:01:59.992134 systemd-logind[1456]: New session 22 of user core.
Sep 13 00:01:59.996940 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:02:00.747077 sshd[6412]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:00.752735 systemd[1]: sshd@21-128.140.85.90:22-147.75.109.163:52016.service: Deactivated successfully.
Sep 13 00:02:00.754819 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:02:00.760596 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:02:00.767646 systemd-logind[1456]: Removed session 22.
Sep 13 00:02:04.596458 systemd[1]: run-containerd-runc-k8s.io-cf81b7a91c5bb98cb8c20c02ff435989a0467f545f0528978c3c7951ae8463fd-runc.In2vRO.mount: Deactivated successfully.
Sep 13 00:02:05.933027 systemd[1]: Started sshd@22-128.140.85.90:22-147.75.109.163:42408.service - OpenSSH per-connection server daemon (147.75.109.163:42408).
Sep 13 00:02:06.946722 sshd[6469]: Accepted publickey for core from 147.75.109.163 port 42408 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:02:06.948390 sshd[6469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:06.955743 systemd-logind[1456]: New session 23 of user core.
Sep 13 00:02:06.961833 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:02:07.734171 sshd[6469]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:07.742270 systemd[1]: sshd@22-128.140.85.90:22-147.75.109.163:42408.service: Deactivated successfully.
Sep 13 00:02:07.746261 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:02:07.747569 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:02:07.751845 systemd-logind[1456]: Removed session 23.
Sep 13 00:02:23.169798 systemd[1]: cri-containerd-ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc.scope: Deactivated successfully.
Sep 13 00:02:23.170120 systemd[1]: cri-containerd-ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc.scope: Consumed 24.879s CPU time.
Sep 13 00:02:23.195364 containerd[1466]: time="2025-09-13T00:02:23.195294666Z" level=info msg="shim disconnected" id=ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc namespace=k8s.io
Sep 13 00:02:23.195364 containerd[1466]: time="2025-09-13T00:02:23.195359626Z" level=warning msg="cleaning up after shim disconnected" id=ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc namespace=k8s.io
Sep 13 00:02:23.195364 containerd[1466]: time="2025-09-13T00:02:23.195370226Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:02:23.196786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc-rootfs.mount: Deactivated successfully.
Sep 13 00:02:23.616957 kubelet[2604]: E0913 00:02:23.616217 2604 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:37376->10.0.0.2:2379: read: connection timed out"
Sep 13 00:02:23.940978 systemd[1]: cri-containerd-1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3.scope: Deactivated successfully.
Sep 13 00:02:23.941298 systemd[1]: cri-containerd-1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3.scope: Consumed 6.689s CPU time, 18.0M memory peak, 0B memory swap peak.
Sep 13 00:02:23.972659 containerd[1466]: time="2025-09-13T00:02:23.971215221Z" level=info msg="shim disconnected" id=1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3 namespace=k8s.io
Sep 13 00:02:23.972659 containerd[1466]: time="2025-09-13T00:02:23.971273381Z" level=warning msg="cleaning up after shim disconnected" id=1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3 namespace=k8s.io
Sep 13 00:02:23.972659 containerd[1466]: time="2025-09-13T00:02:23.971286221Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:02:23.975806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3-rootfs.mount: Deactivated successfully.
Sep 13 00:02:24.232366 kubelet[2604]: I0913 00:02:24.232292 2604 scope.go:117] "RemoveContainer" containerID="1fde2f4e04a0f53de9f848ad08479866580f8f68b8ccdc7f0ed0006605e623d3"
Sep 13 00:02:24.234771 kubelet[2604]: I0913 00:02:24.232695 2604 scope.go:117] "RemoveContainer" containerID="ae0c156d66e8e8504c52347ec1ad415d85fa8471cc80f3566dcbdea3e2cf7ecc"
Sep 13 00:02:24.234944 containerd[1466]: time="2025-09-13T00:02:24.234878538Z" level=info msg="CreateContainer within sandbox \"d92bf5c14ab0f94671945f6fc3a576219c36c9dbee5fb6d70b608a9f77c650ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 13 00:02:24.237761 containerd[1466]: time="2025-09-13T00:02:24.237711209Z" level=info msg="CreateContainer within sandbox \"94518b888dc280d8376e4119590a2abd26e041cfab925ca0c6b36d195f4aedfe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 13 00:02:24.261273 containerd[1466]: time="2025-09-13T00:02:24.260835938Z" level=info msg="CreateContainer within sandbox \"d92bf5c14ab0f94671945f6fc3a576219c36c9dbee5fb6d70b608a9f77c650ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"362f51cfb005997feb768b131558d926c9c165a47d7e20dc8ee3a611e0842737\""
Sep 13 00:02:24.265140 containerd[1466]: time="2025-09-13T00:02:24.265097725Z" level=info msg="StartContainer for \"362f51cfb005997feb768b131558d926c9c165a47d7e20dc8ee3a611e0842737\""
Sep 13 00:02:24.268009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3975887016.mount: Deactivated successfully.
Sep 13 00:02:24.274909 containerd[1466]: time="2025-09-13T00:02:24.274862936Z" level=info msg="CreateContainer within sandbox \"94518b888dc280d8376e4119590a2abd26e041cfab925ca0c6b36d195f4aedfe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e4a20c848da88cf07d1e7288614e1b0cd74e4c62568b05e7f3ae386e87fe6816\""
Sep 13 00:02:24.276032 containerd[1466]: time="2025-09-13T00:02:24.276004052Z" level=info msg="StartContainer for \"e4a20c848da88cf07d1e7288614e1b0cd74e4c62568b05e7f3ae386e87fe6816\""
Sep 13 00:02:24.317966 systemd[1]: Started cri-containerd-362f51cfb005997feb768b131558d926c9c165a47d7e20dc8ee3a611e0842737.scope - libcontainer container 362f51cfb005997feb768b131558d926c9c165a47d7e20dc8ee3a611e0842737.
Sep 13 00:02:24.319711 systemd[1]: Started cri-containerd-e4a20c848da88cf07d1e7288614e1b0cd74e4c62568b05e7f3ae386e87fe6816.scope - libcontainer container e4a20c848da88cf07d1e7288614e1b0cd74e4c62568b05e7f3ae386e87fe6816.
Sep 13 00:02:24.361594 containerd[1466]: time="2025-09-13T00:02:24.361061833Z" level=info msg="StartContainer for \"362f51cfb005997feb768b131558d926c9c165a47d7e20dc8ee3a611e0842737\" returns successfully"
Sep 13 00:02:24.377299 containerd[1466]: time="2025-09-13T00:02:24.377229664Z" level=info msg="StartContainer for \"e4a20c848da88cf07d1e7288614e1b0cd74e4c62568b05e7f3ae386e87fe6816\" returns successfully"
Sep 13 00:02:27.713862 kubelet[2604]: E0913 00:02:27.711195 2604 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40672->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-f526684106.1864ae9b7307e00b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-f526684106,UID:d51acb427c8724bde6024fb2c915f2fd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-f526684106,},FirstTimestamp:2025-09-13 00:02:17.254395915 +0000 UTC m=+244.246082164,LastTimestamp:2025-09-13 00:02:17.254395915 +0000 UTC m=+244.246082164,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-f526684106,}"
Sep 13 00:02:28.689797 systemd[1]: cri-containerd-dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad.scope: Deactivated successfully.
Sep 13 00:02:28.690099 systemd[1]: cri-containerd-dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad.scope: Consumed 4.501s CPU time, 16.1M memory peak, 0B memory swap peak.
Sep 13 00:02:28.716216 containerd[1466]: time="2025-09-13T00:02:28.716149308Z" level=info msg="shim disconnected" id=dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad namespace=k8s.io
Sep 13 00:02:28.716216 containerd[1466]: time="2025-09-13T00:02:28.716208868Z" level=warning msg="cleaning up after shim disconnected" id=dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad namespace=k8s.io
Sep 13 00:02:28.716216 containerd[1466]: time="2025-09-13T00:02:28.716218468Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:02:28.718423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad-rootfs.mount: Deactivated successfully.
Sep 13 00:02:29.246568 kubelet[2604]: I0913 00:02:29.246439 2604 scope.go:117] "RemoveContainer" containerID="dd2d89aad9b8eaa19f77b086d8a5b853163863d3ac0cdb5f97199e90f2ae3cad"
Sep 13 00:02:29.249229 containerd[1466]: time="2025-09-13T00:02:29.249034802Z" level=info msg="CreateContainer within sandbox \"49b890a02a54f1897d123cafab807bd37319190950db46a28b635ef1f44d43e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 13 00:02:29.272960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089555134.mount: Deactivated successfully.
Sep 13 00:02:29.274556 containerd[1466]: time="2025-09-13T00:02:29.273657367Z" level=info msg="CreateContainer within sandbox \"49b890a02a54f1897d123cafab807bd37319190950db46a28b635ef1f44d43e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"17c706290b4ca5c90c39362d5e32315c114709a35e2f229c4c9a0b08c9d00957\""
Sep 13 00:02:29.274556 containerd[1466]: time="2025-09-13T00:02:29.274300045Z" level=info msg="StartContainer for \"17c706290b4ca5c90c39362d5e32315c114709a35e2f229c4c9a0b08c9d00957\""
Sep 13 00:02:29.312864 systemd[1]: Started cri-containerd-17c706290b4ca5c90c39362d5e32315c114709a35e2f229c4c9a0b08c9d00957.scope - libcontainer container 17c706290b4ca5c90c39362d5e32315c114709a35e2f229c4c9a0b08c9d00957.
Sep 13 00:02:29.354211 containerd[1466]: time="2025-09-13T00:02:29.354092321Z" level=info msg="StartContainer for \"17c706290b4ca5c90c39362d5e32315c114709a35e2f229c4c9a0b08c9d00957\" returns successfully"