Aug 13 00:41:06.889438 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Aug 13 00:41:06.889726 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025 Aug 13 00:41:06.889740 kernel: KASLR enabled Aug 13 00:41:06.889747 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Aug 13 00:41:06.889753 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Aug 13 00:41:06.889760 kernel: random: crng init done Aug 13 00:41:06.889768 kernel: ACPI: Early table checksum verification disabled Aug 13 00:41:06.889775 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Aug 13 00:41:06.889782 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Aug 13 00:41:06.889790 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:41:06.889797 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:41:06.889804 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:41:06.889811 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:41:06.889818 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:41:06.889826 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:41:06.889835 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:41:06.889843 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:41:06.889850 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:41:06.889857 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Aug 13 00:41:06.889864 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Aug 13 00:41:06.889871 kernel: NUMA: Failed to initialise from firmware Aug 13 00:41:06.889879 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Aug 13 00:41:06.889899 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Aug 13 00:41:06.889906 kernel: Zone ranges: Aug 13 00:41:06.889913 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Aug 13 00:41:06.889923 kernel: DMA32 empty Aug 13 00:41:06.889930 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Aug 13 00:41:06.889937 kernel: Movable zone start for each node Aug 13 00:41:06.889944 kernel: Early memory node ranges Aug 13 00:41:06.889952 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Aug 13 00:41:06.889959 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Aug 13 00:41:06.889966 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Aug 13 00:41:06.889973 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Aug 13 00:41:06.889981 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Aug 13 00:41:06.889988 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Aug 13 00:41:06.889995 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Aug 13 00:41:06.890002 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Aug 13 00:41:06.890011 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Aug 13 00:41:06.890018 kernel: psci: probing for conduit method from ACPI. 
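Aside: the faked NUMA node above spans [mem 0x0000000040000000-0x0000000139ffffff]. A quick check in illustrative Python (not part of the boot log) confirms this is exactly the 4096000K total that the later "Memory: 3882808K/4096000K available" line reports:

# Sanity-check the faked NUMA node span against the totals printed later.
start, end = 0x0000000040000000, 0x0000000139ffffff
span = end - start + 1
print(span)            # 4194304000 bytes
print(span // 1024)    # 4096000 KiB, matching "4096000K" later in the log
print(span / 2**30)    # ~3.906 GiB presented to the guest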
Aug 13 00:41:06.890025 kernel: psci: PSCIv1.1 detected in firmware. Aug 13 00:41:06.890036 kernel: psci: Using standard PSCI v0.2 function IDs Aug 13 00:41:06.890043 kernel: psci: Trusted OS migration not required Aug 13 00:41:06.890051 kernel: psci: SMC Calling Convention v1.1 Aug 13 00:41:06.890060 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Aug 13 00:41:06.890068 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Aug 13 00:41:06.890076 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Aug 13 00:41:06.890084 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 13 00:41:06.890091 kernel: Detected PIPT I-cache on CPU0 Aug 13 00:41:06.890099 kernel: CPU features: detected: GIC system register CPU interface Aug 13 00:41:06.890107 kernel: CPU features: detected: Hardware dirty bit management Aug 13 00:41:06.890114 kernel: CPU features: detected: Spectre-v4 Aug 13 00:41:06.890122 kernel: CPU features: detected: Spectre-BHB Aug 13 00:41:06.890130 kernel: CPU features: kernel page table isolation forced ON by KASLR Aug 13 00:41:06.890139 kernel: CPU features: detected: Kernel page table isolation (KPTI) Aug 13 00:41:06.890146 kernel: CPU features: detected: ARM erratum 1418040 Aug 13 00:41:06.890154 kernel: CPU features: detected: SSBS not fully self-synchronizing Aug 13 00:41:06.890161 kernel: alternatives: applying boot alternatives Aug 13 00:41:06.890171 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:41:06.890179 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:41:06.890187 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:41:06.890195 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:41:06.890202 kernel: Fallback order for Node 0: 0 Aug 13 00:41:06.890210 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Aug 13 00:41:06.890218 kernel: Policy zone: Normal Aug 13 00:41:06.890227 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:41:06.890234 kernel: software IO TLB: area num 2. Aug 13 00:41:06.890242 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Aug 13 00:41:06.890250 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved) Aug 13 00:41:06.890258 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:41:06.890266 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:41:06.890274 kernel: rcu: RCU event tracing is enabled. Aug 13 00:41:06.890282 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:41:06.890289 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:41:06.890297 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:41:06.890305 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
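Aside: the kernel command line above is a flat run of space-separated key=value tokens. An illustrative parser (a sketch, not kernel code; it assumes the simple unquoted form seen here) makes it easier to inspect:

cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 "
    "flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner "
    "verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a"
)

params = {}
for token in cmdline.split():
    key, _, value = token.partition("=")    # split on the first '=' only
    params[key] = value or True             # bare tokens become boolean flags

print(params["root"])        # LABEL=ROOT
print(params["verity.usr"])  # PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132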
Aug 13 00:41:06.890314 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:41:06.890322 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 13 00:41:06.890329 kernel: GICv3: 256 SPIs implemented Aug 13 00:41:06.890337 kernel: GICv3: 0 Extended SPIs implemented Aug 13 00:41:06.890344 kernel: Root IRQ handler: gic_handle_irq Aug 13 00:41:06.890352 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Aug 13 00:41:06.890359 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Aug 13 00:41:06.890367 kernel: ITS [mem 0x08080000-0x0809ffff] Aug 13 00:41:06.890375 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Aug 13 00:41:06.890383 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Aug 13 00:41:06.890390 kernel: GICv3: using LPI property table @0x00000001000e0000 Aug 13 00:41:06.890398 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Aug 13 00:41:06.890407 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 00:41:06.890415 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 00:41:06.890423 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Aug 13 00:41:06.890431 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Aug 13 00:41:06.890438 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Aug 13 00:41:06.890446 kernel: Console: colour dummy device 80x25 Aug 13 00:41:06.890466 kernel: ACPI: Core revision 20230628 Aug 13 00:41:06.890475 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Aug 13 00:41:06.890482 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:41:06.890490 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 00:41:06.890500 kernel: landlock: Up and running. Aug 13 00:41:06.890508 kernel: SELinux: Initializing. Aug 13 00:41:06.890516 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:41:06.890524 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:41:06.890532 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:41:06.890540 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:41:06.890548 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:41:06.890556 kernel: rcu: Max phase no-delay instances is 400. Aug 13 00:41:06.890563 kernel: Platform MSI: ITS@0x8080000 domain created Aug 13 00:41:06.890573 kernel: PCI/MSI: ITS@0x8080000 domain created Aug 13 00:41:06.890581 kernel: Remapping and enabling EFI services. Aug 13 00:41:06.890588 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:41:06.890596 kernel: Detected PIPT I-cache on CPU1 Aug 13 00:41:06.890604 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Aug 13 00:41:06.890612 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Aug 13 00:41:06.890620 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 00:41:06.890628 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Aug 13 00:41:06.890636 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:41:06.890643 kernel: SMP: Total of 2 processors activated. 
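Aside: the 25.00 MHz arch timer explains several figures above; some illustrative arithmetic (the HZ=1000 value is inferred from lpj=25000 at 25 MHz, not printed directly):

timer_hz = 25_000_000          # "cp15 timer(s) running at 25.00MHz"
print(1e9 / timer_hz)          # 40.0 -> "resolution 40ns"

lpj, HZ = 25_000, 1000         # lpj = timer_hz / HZ, so HZ must be 1000
print(lpj * HZ / 500_000)      # 50.0 -> "50.00 BogoMIPS (lpj=25000)"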
Aug 13 00:41:06.890653 kernel: CPU features: detected: 32-bit EL0 Support Aug 13 00:41:06.890684 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Aug 13 00:41:06.890700 kernel: CPU features: detected: Common not Private translations Aug 13 00:41:06.890711 kernel: CPU features: detected: CRC32 instructions Aug 13 00:41:06.890719 kernel: CPU features: detected: Enhanced Virtualization Traps Aug 13 00:41:06.890728 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Aug 13 00:41:06.890736 kernel: CPU features: detected: LSE atomic instructions Aug 13 00:41:06.890744 kernel: CPU features: detected: Privileged Access Never Aug 13 00:41:06.890752 kernel: CPU features: detected: RAS Extension Support Aug 13 00:41:06.890762 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Aug 13 00:41:06.890771 kernel: CPU: All CPU(s) started at EL1 Aug 13 00:41:06.890779 kernel: alternatives: applying system-wide alternatives Aug 13 00:41:06.890787 kernel: devtmpfs: initialized Aug 13 00:41:06.890796 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:41:06.890804 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:41:06.890812 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:41:06.890822 kernel: SMBIOS 3.0.0 present. Aug 13 00:41:06.890830 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Aug 13 00:41:06.890839 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:41:06.890847 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 13 00:41:06.890855 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 13 00:41:06.890864 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 13 00:41:06.890873 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:41:06.890881 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Aug 13 00:41:06.890897 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:41:06.890908 kernel: cpuidle: using governor menu Aug 13 00:41:06.890917 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
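Aside: the "CPU features: detected:" entries above surface to userspace as hwcap flags. A hypothetical follow-up check on the running system ("crc32" and "atomics" are the standard aarch64 /proc/cpuinfo feature strings for the CRC32 and LSE detections logged above):

def cpu_features(path="/proc/cpuinfo"):
    # Return the aarch64 "Features" flags for the first CPU listed.
    with open(path) as f:
        for line in f:
            if line.startswith("Features"):
                return line.split(":", 1)[1].split()
    return []

feats = cpu_features()
print("crc32" in feats, "atomics" in feats)   # expect True True per the log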
Aug 13 00:41:06.890925 kernel: ASID allocator initialised with 32768 entries Aug 13 00:41:06.890933 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:41:06.890942 kernel: Serial: AMBA PL011 UART driver Aug 13 00:41:06.890950 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Aug 13 00:41:06.890959 kernel: Modules: 0 pages in range for non-PLT usage Aug 13 00:41:06.890967 kernel: Modules: 509008 pages in range for PLT usage Aug 13 00:41:06.890975 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:41:06.890985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 00:41:06.890994 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 13 00:41:06.891002 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 13 00:41:06.891010 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:41:06.891019 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:41:06.891027 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 13 00:41:06.891035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 13 00:41:06.891044 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:41:06.891052 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:41:06.891062 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:41:06.891071 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:41:06.891080 kernel: ACPI: Interpreter enabled Aug 13 00:41:06.891088 kernel: ACPI: Using GIC for interrupt routing Aug 13 00:41:06.891096 kernel: ACPI: MCFG table detected, 1 entries Aug 13 00:41:06.891104 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Aug 13 00:41:06.891113 kernel: printk: console [ttyAMA0] enabled Aug 13 00:41:06.891121 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 00:41:06.891278 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:41:06.891377 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 13 00:41:06.891495 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 13 00:41:06.891583 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Aug 13 00:41:06.891658 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Aug 13 00:41:06.891669 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Aug 13 00:41:06.891678 kernel: PCI host bridge to bus 0000:00 Aug 13 00:41:06.891758 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Aug 13 00:41:06.893587 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Aug 13 00:41:06.893661 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Aug 13 00:41:06.893721 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 00:41:06.893814 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Aug 13 00:41:06.893914 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Aug 13 00:41:06.893994 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Aug 13 00:41:06.894069 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Aug 13 00:41:06.894144 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Aug 13 00:41:06.894212 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Aug 13 
00:41:06.894300 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Aug 13 00:41:06.894384 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Aug 13 00:41:06.894505 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Aug 13 00:41:06.894577 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Aug 13 00:41:06.894655 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Aug 13 00:41:06.894720 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Aug 13 00:41:06.894795 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Aug 13 00:41:06.894871 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Aug 13 00:41:06.894994 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Aug 13 00:41:06.895071 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Aug 13 00:41:06.895151 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Aug 13 00:41:06.895219 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Aug 13 00:41:06.895291 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Aug 13 00:41:06.895359 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Aug 13 00:41:06.895432 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Aug 13 00:41:06.896675 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Aug 13 00:41:06.896774 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Aug 13 00:41:06.896841 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Aug 13 00:41:06.896973 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Aug 13 00:41:06.897046 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Aug 13 00:41:06.897115 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Aug 13 00:41:06.897182 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Aug 13 00:41:06.897264 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Aug 13 00:41:06.897330 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Aug 13 00:41:06.897404 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Aug 13 00:41:06.897487 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Aug 13 00:41:06.897558 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Aug 13 00:41:06.897634 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Aug 13 00:41:06.897702 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Aug 13 00:41:06.897787 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Aug 13 00:41:06.897856 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Aug 13 00:41:06.897950 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Aug 13 00:41:06.898021 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Aug 13 00:41:06.898089 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Aug 13 00:41:06.898164 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Aug 13 00:41:06.898236 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Aug 13 00:41:06.898304 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Aug 13 00:41:06.898370 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Aug 13 00:41:06.898440 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Aug 13 00:41:06.901586 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit 
pref] to [bus 01] add_size 100000 add_align 100000 Aug 13 00:41:06.901665 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Aug 13 00:41:06.901743 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Aug 13 00:41:06.901811 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Aug 13 00:41:06.901880 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Aug 13 00:41:06.901979 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Aug 13 00:41:06.902047 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Aug 13 00:41:06.902114 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Aug 13 00:41:06.902184 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Aug 13 00:41:06.902250 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Aug 13 00:41:06.902320 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Aug 13 00:41:06.902390 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Aug 13 00:41:06.902479 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Aug 13 00:41:06.902550 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Aug 13 00:41:06.902621 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Aug 13 00:41:06.902687 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Aug 13 00:41:06.902752 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Aug 13 00:41:06.902826 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Aug 13 00:41:06.902927 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Aug 13 00:41:06.903001 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Aug 13 00:41:06.903072 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Aug 13 00:41:06.903138 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Aug 13 00:41:06.903203 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Aug 13 00:41:06.903273 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Aug 13 00:41:06.903338 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Aug 13 00:41:06.903408 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Aug 13 00:41:06.904628 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Aug 13 00:41:06.904710 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Aug 13 00:41:06.904792 kernel: pci 
0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Aug 13 00:41:06.904875 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Aug 13 00:41:06.905005 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Aug 13 00:41:06.905073 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Aug 13 00:41:06.905146 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Aug 13 00:41:06.905211 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Aug 13 00:41:06.905278 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Aug 13 00:41:06.905342 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Aug 13 00:41:06.905410 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Aug 13 00:41:06.905493 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Aug 13 00:41:06.905570 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Aug 13 00:41:06.905636 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Aug 13 00:41:06.905705 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Aug 13 00:41:06.905771 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Aug 13 00:41:06.905839 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Aug 13 00:41:06.905922 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Aug 13 00:41:06.905998 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Aug 13 00:41:06.906069 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Aug 13 00:41:06.906138 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Aug 13 00:41:06.906203 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Aug 13 00:41:06.906269 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Aug 13 00:41:06.906335 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Aug 13 00:41:06.906401 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Aug 13 00:41:06.907354 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Aug 13 00:41:06.907476 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Aug 13 00:41:06.907573 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Aug 13 00:41:06.907680 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Aug 13 00:41:06.907753 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Aug 13 00:41:06.909992 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Aug 13 00:41:06.910076 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Aug 13 00:41:06.910189 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Aug 13 00:41:06.910565 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Aug 13 00:41:06.910653 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Aug 13 00:41:06.910733 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Aug 13 00:41:06.910809 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Aug 13 00:41:06.910895 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Aug 13 00:41:06.910986 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Aug 13 00:41:06.911070 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 
0x10000000-0x1007ffff pref] Aug 13 00:41:06.911147 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Aug 13 00:41:06.911224 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Aug 13 00:41:06.911299 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Aug 13 00:41:06.911377 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Aug 13 00:41:06.911486 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Aug 13 00:41:06.911566 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Aug 13 00:41:06.911651 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Aug 13 00:41:06.911731 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Aug 13 00:41:06.911814 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Aug 13 00:41:06.911936 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Aug 13 00:41:06.912027 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Aug 13 00:41:06.912113 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Aug 13 00:41:06.912191 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Aug 13 00:41:06.912268 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Aug 13 00:41:06.912339 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Aug 13 00:41:06.912444 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Aug 13 00:41:06.913435 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Aug 13 00:41:06.913634 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Aug 13 00:41:06.913713 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Aug 13 00:41:06.913779 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Aug 13 00:41:06.913845 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Aug 13 00:41:06.913930 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Aug 13 00:41:06.914007 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Aug 13 00:41:06.914080 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Aug 13 00:41:06.914156 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Aug 13 00:41:06.914222 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Aug 13 00:41:06.914293 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Aug 13 00:41:06.914368 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Aug 13 00:41:06.914439 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Aug 13 00:41:06.914521 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Aug 13 00:41:06.914597 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Aug 13 00:41:06.914669 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Aug 13 00:41:06.914733 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Aug 13 00:41:06.914805 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Aug 13 00:41:06.914872 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Aug 13 00:41:06.914984 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Aug 13 00:41:06.915057 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Aug 13 00:41:06.915127 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Aug 13 00:41:06.915191 kernel: pci 0000:00:02.6: 
bridge window [mem 0x10c00000-0x10dfffff] Aug 13 00:41:06.915259 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Aug 13 00:41:06.915327 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Aug 13 00:41:06.915392 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Aug 13 00:41:06.915487 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Aug 13 00:41:06.915559 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Aug 13 00:41:06.915627 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Aug 13 00:41:06.915692 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Aug 13 00:41:06.915756 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Aug 13 00:41:06.915825 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Aug 13 00:41:06.915903 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Aug 13 00:41:06.915969 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Aug 13 00:41:06.916029 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Aug 13 00:41:06.916099 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Aug 13 00:41:06.916160 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Aug 13 00:41:06.916221 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Aug 13 00:41:06.916298 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Aug 13 00:41:06.916358 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Aug 13 00:41:06.916418 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Aug 13 00:41:06.916503 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Aug 13 00:41:06.916567 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Aug 13 00:41:06.916628 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Aug 13 00:41:06.916698 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Aug 13 00:41:06.916759 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Aug 13 00:41:06.916822 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Aug 13 00:41:06.916934 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Aug 13 00:41:06.917006 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Aug 13 00:41:06.917065 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Aug 13 00:41:06.917137 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Aug 13 00:41:06.917201 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Aug 13 00:41:06.917261 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Aug 13 00:41:06.917329 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Aug 13 00:41:06.917390 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Aug 13 00:41:06.917553 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Aug 13 00:41:06.917632 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Aug 13 00:41:06.917692 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Aug 13 00:41:06.917751 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Aug 13 00:41:06.917818 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Aug 13 00:41:06.917878 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Aug 13 00:41:06.917956 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Aug 13 00:41:06.917971 
kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Aug 13 00:41:06.917979 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Aug 13 00:41:06.917987 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Aug 13 00:41:06.917995 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Aug 13 00:41:06.918004 kernel: iommu: Default domain type: Translated Aug 13 00:41:06.918012 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 13 00:41:06.918020 kernel: efivars: Registered efivars operations Aug 13 00:41:06.918028 kernel: vgaarb: loaded Aug 13 00:41:06.918035 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 13 00:41:06.918045 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:41:06.918053 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:41:06.918060 kernel: pnp: PnP ACPI init Aug 13 00:41:06.918134 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Aug 13 00:41:06.918145 kernel: pnp: PnP ACPI: found 1 devices Aug 13 00:41:06.918153 kernel: NET: Registered PF_INET protocol family Aug 13 00:41:06.918161 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:41:06.918169 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 00:41:06.918178 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:41:06.918187 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:41:06.918195 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 00:41:06.918202 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 00:41:06.918210 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:41:06.918218 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:41:06.918226 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:41:06.918298 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Aug 13 00:41:06.918309 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:41:06.918319 kernel: kvm [1]: HYP mode not available Aug 13 00:41:06.918327 kernel: Initialise system trusted keyrings Aug 13 00:41:06.918335 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 00:41:06.918342 kernel: Key type asymmetric registered Aug 13 00:41:06.918350 kernel: Asymmetric key parser 'x509' registered Aug 13 00:41:06.918357 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 00:41:06.918365 kernel: io scheduler mq-deadline registered Aug 13 00:41:06.918373 kernel: io scheduler kyber registered Aug 13 00:41:06.918380 kernel: io scheduler bfq registered Aug 13 00:41:06.918390 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Aug 13 00:41:06.918469 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Aug 13 00:41:06.918539 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Aug 13 00:41:06.918607 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:41:06.918673 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Aug 13 00:41:06.918739 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Aug 13 00:41:06.918806 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:41:06.918892 kernel: pcieport 
0000:00:02.2: PME: Signaling with IRQ 52 Aug 13 00:41:06.918966 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Aug 13 00:41:06.919032 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:41:06.919099 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Aug 13 00:41:06.919164 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Aug 13 00:41:06.919230 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:41:06.919300 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Aug 13 00:41:06.919366 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Aug 13 00:41:06.919430 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:41:06.919549 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Aug 13 00:41:06.919617 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Aug 13 00:41:06.919687 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:41:06.919753 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Aug 13 00:41:06.919817 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Aug 13 00:41:06.919881 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:41:06.919986 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Aug 13 00:41:06.920053 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Aug 13 00:41:06.920123 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:41:06.920133 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Aug 13 00:41:06.920197 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Aug 13 00:41:06.920262 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Aug 13 00:41:06.920327 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:41:06.920337 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 13 00:41:06.920345 kernel: ACPI: button: Power Button [PWRB] Aug 13 00:41:06.920353 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 13 00:41:06.920425 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Aug 13 00:41:06.920510 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Aug 13 00:41:06.920523 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:41:06.920531 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Aug 13 00:41:06.920602 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Aug 13 00:41:06.920613 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Aug 13 00:41:06.920621 kernel: thunder_xcv, ver 1.0 Aug 13 00:41:06.920629 kernel: thunder_bgx, ver 1.0 Aug 13 00:41:06.920639 kernel: nicpf, ver 1.0 Aug 13 00:41:06.920646 kernel: nicvf, ver 1.0 Aug 13 00:41:06.920724 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 13 00:41:06.920786 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:41:06 UTC (1755045666) Aug 13 00:41:06.920797 kernel: hid: raw HID events driver (C) 
Jiri Kosina Aug 13 00:41:06.920805 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Aug 13 00:41:06.920813 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 13 00:41:06.920821 kernel: watchdog: Hard watchdog permanently disabled Aug 13 00:41:06.920831 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:41:06.920838 kernel: Segment Routing with IPv6 Aug 13 00:41:06.920846 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:41:06.920854 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:41:06.920862 kernel: Key type dns_resolver registered Aug 13 00:41:06.920869 kernel: registered taskstats version 1 Aug 13 00:41:06.920877 kernel: Loading compiled-in X.509 certificates Aug 13 00:41:06.920895 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6' Aug 13 00:41:06.920903 kernel: Key type .fscrypt registered Aug 13 00:41:06.920911 kernel: Key type fscrypt-provisioning registered Aug 13 00:41:06.920921 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 00:41:06.920929 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:41:06.920936 kernel: ima: No architecture policies found Aug 13 00:41:06.920944 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 13 00:41:06.920952 kernel: clk: Disabling unused clocks Aug 13 00:41:06.920959 kernel: Freeing unused kernel memory: 39424K Aug 13 00:41:06.920967 kernel: Run /init as init process Aug 13 00:41:06.920975 kernel: with arguments: Aug 13 00:41:06.920984 kernel: /init Aug 13 00:41:06.920992 kernel: with environment: Aug 13 00:41:06.920999 kernel: HOME=/ Aug 13 00:41:06.921007 kernel: TERM=linux Aug 13 00:41:06.921014 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:41:06.921024 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:41:06.921034 systemd[1]: Detected virtualization kvm. Aug 13 00:41:06.921043 systemd[1]: Detected architecture arm64. Aug 13 00:41:06.921052 systemd[1]: Running in initrd. Aug 13 00:41:06.921061 systemd[1]: No hostname configured, using default hostname. Aug 13 00:41:06.921069 systemd[1]: Hostname set to . Aug 13 00:41:06.921077 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:41:06.921085 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:41:06.921093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:41:06.921102 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:41:06.921110 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:41:06.921121 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:41:06.921129 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:41:06.921137 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:41:06.921147 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... 
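Aside: unit names such as dev-disk-by\x2dlabel-ROOT.device are systemd's escaped form of device paths: '/' separators become '-' and other non-alphanumeric bytes are hex-escaped. A simplified sketch of that mangling (corner cases like a leading '.' are ignored; systemd-escape --path is the real tool):

def systemd_escape_path(path):
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                  # path separator
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)                   # kept verbatim
        else:
            out.append("\\x%02x" % ord(ch))  # e.g. '-' -> \x2d
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/ROOT") + ".device")
# dev-disk-by\x2dlabel-ROOT.device, matching the units above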
Aug 13 00:41:06.921159 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:41:06.921169 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:41:06.921179 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:41:06.921190 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:41:06.921198 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:41:06.921206 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:41:06.921215 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:41:06.921223 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:41:06.921231 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:41:06.921241 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:41:06.921249 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:41:06.921259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:41:06.921267 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:41:06.921276 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:41:06.921284 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:41:06.921292 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:41:06.921300 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:41:06.921308 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:41:06.921317 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:41:06.921325 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:41:06.921335 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:41:06.921343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:41:06.921352 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:41:06.921379 systemd-journald[236]: Collecting audit messages is disabled. Aug 13 00:41:06.921402 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:41:06.921410 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:41:06.921419 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:41:06.921427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:41:06.921437 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:41:06.921447 systemd-journald[236]: Journal started Aug 13 00:41:06.921517 systemd-journald[236]: Runtime Journal (/run/log/journal/d63d1c0df37b4e61b420514cee40868f) is 8.0M, max 76.6M, 68.6M free. Aug 13 00:41:06.904316 systemd-modules-load[237]: Inserted module 'overlay' Aug 13 00:41:06.924288 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:41:06.928489 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:41:06.929872 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Aug 13 00:41:06.934190 kernel: Bridge firewalling registered Aug 13 00:41:06.931916 systemd-modules-load[237]: Inserted module 'br_netfilter' Aug 13 00:41:06.933025 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:41:06.941655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:41:06.945674 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:41:06.948213 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:41:06.956958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:41:06.960750 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:41:06.966633 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:41:06.968922 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:41:06.969866 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:41:06.975681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:41:06.983461 dracut-cmdline[270]: dracut-dracut-053 Aug 13 00:41:06.985482 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:41:07.016175 systemd-resolved[276]: Positive Trust Anchors: Aug 13 00:41:07.016201 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:41:07.016258 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:41:07.027043 systemd-resolved[276]: Defaulting to hostname 'linux'. Aug 13 00:41:07.029790 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:41:07.030693 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:41:07.085482 kernel: SCSI subsystem initialized Aug 13 00:41:07.090578 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:41:07.098480 kernel: iscsi: registered transport (tcp) Aug 13 00:41:07.111491 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:41:07.111550 kernel: QLogic iSCSI HBA Driver Aug 13 00:41:07.165143 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:41:07.174782 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 00:41:07.194084 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Aug 13 00:41:07.194164 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:41:07.194193 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 00:41:07.247524 kernel: raid6: neonx8 gen() 15424 MB/s Aug 13 00:41:07.264504 kernel: raid6: neonx4 gen() 15404 MB/s Aug 13 00:41:07.281505 kernel: raid6: neonx2 gen() 13051 MB/s Aug 13 00:41:07.298538 kernel: raid6: neonx1 gen() 10250 MB/s Aug 13 00:41:07.315501 kernel: raid6: int64x8 gen() 6826 MB/s Aug 13 00:41:07.332531 kernel: raid6: int64x4 gen() 7214 MB/s Aug 13 00:41:07.349521 kernel: raid6: int64x2 gen() 5937 MB/s Aug 13 00:41:07.366515 kernel: raid6: int64x1 gen() 4920 MB/s Aug 13 00:41:07.366588 kernel: raid6: using algorithm neonx8 gen() 15424 MB/s Aug 13 00:41:07.383534 kernel: raid6: .... xor() 11732 MB/s, rmw enabled Aug 13 00:41:07.383594 kernel: raid6: using neon recovery algorithm Aug 13 00:41:07.388642 kernel: xor: measuring software checksum speed Aug 13 00:41:07.388694 kernel: 8regs : 19510 MB/sec Aug 13 00:41:07.389573 kernel: 32regs : 19204 MB/sec Aug 13 00:41:07.389603 kernel: arm64_neon : 26901 MB/sec Aug 13 00:41:07.389633 kernel: xor: using function: arm64_neon (26901 MB/sec) Aug 13 00:41:07.441536 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:41:07.456685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:41:07.466746 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:41:07.480385 systemd-udevd[456]: Using default interface naming scheme 'v255'. Aug 13 00:41:07.483780 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:41:07.492953 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:41:07.510034 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Aug 13 00:41:07.545703 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:41:07.550668 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:41:07.601403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:41:07.606795 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:41:07.633120 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:41:07.638212 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:41:07.640675 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:41:07.642822 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:41:07.650667 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:41:07.666724 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:41:07.713273 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:41:07.713505 kernel: ACPI: bus type USB registered Aug 13 00:41:07.714477 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 13 00:41:07.715476 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 00:41:07.718040 kernel: usbcore: registered new interface driver usbfs Aug 13 00:41:07.718081 kernel: usbcore: registered new interface driver hub Aug 13 00:41:07.718965 kernel: usbcore: registered new device driver usb Aug 13 00:41:07.726607 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
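Aside: the raid6 lines above benchmark every gen() implementation and keep the fastest (the xor lines do the same for checksumming); a toy recreation of that selection from the logged throughputs:

gen_mb_s = {                      # MB/s figures copied from the log
    "neonx8": 15424, "neonx4": 15404, "neonx2": 13051, "neonx1": 10250,
    "int64x8": 6826, "int64x4": 7214, "int64x2": 5937, "int64x1": 4920,
}
best = max(gen_mb_s, key=gen_mb_s.get)
print(best, gen_mb_s[best])       # neonx8 15424 -> "using algorithm neonx8"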
Aug 13 00:41:07.727791 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:41:07.731380 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:41:07.736975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:41:07.737185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:41:07.738992 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:41:07.745699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:41:07.758011 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Aug 13 00:41:07.758237 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Aug 13 00:41:07.758333 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Aug 13 00:41:07.759849 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Aug 13 00:41:07.760647 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Aug 13 00:41:07.761675 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Aug 13 00:41:07.764483 kernel: sr 0:0:0:0: Power-on or device reset occurred Aug 13 00:41:07.767788 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Aug 13 00:41:07.768024 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:41:07.768037 kernel: hub 1-0:1.0: USB hub found Aug 13 00:41:07.768753 kernel: hub 1-0:1.0: 4 ports detected Aug 13 00:41:07.771492 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Aug 13 00:41:07.774475 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Aug 13 00:41:07.775126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:41:07.778075 kernel: hub 2-0:1.0: USB hub found Aug 13 00:41:07.778228 kernel: hub 2-0:1.0: 4 ports detected Aug 13 00:41:07.781849 kernel: sd 0:0:0:1: Power-on or device reset occurred Aug 13 00:41:07.784079 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Aug 13 00:41:07.784269 kernel: sd 0:0:0:1: [sda] Write Protect is off Aug 13 00:41:07.784358 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Aug 13 00:41:07.784485 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 00:41:07.784383 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:41:07.790055 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:41:07.791832 kernel: GPT:17805311 != 80003071 Aug 13 00:41:07.791863 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:41:07.791897 kernel: GPT:17805311 != 80003071 Aug 13 00:41:07.791909 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:41:07.791918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:41:07.793598 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Aug 13 00:41:07.815985 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:41:07.838487 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (510) Aug 13 00:41:07.842099 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
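Aside: the GPT complaints above ("GPT:17805311 != 80003071") mean the backup GPT header still sits at the end of the original, smaller disk image rather than at the last LBA of the grown virtual disk. Illustrative arithmetic recovers both sizes:

SECTOR = 512
disk_sectors = 80_003_072           # "[sda] 80003072 512-byte logical blocks"
print(disk_sectors - 1)             # 80003071, expected alternate-header LBA

stored_alt_lba = 17_805_311         # where the backup header actually is
print((stored_alt_lba + 1) * SECTOR / 2**30)   # ~8.49 GiB original image
print(disk_sectors * SECTOR / 1e9)  # ~40.96 GB -> "(41.0 GB/38.1 GiB)"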
Aug 13 00:41:07.845480 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (511) Aug 13 00:41:07.848711 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 00:41:07.862278 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:41:07.867895 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 00:41:07.870350 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 00:41:07.884707 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:41:07.894141 disk-uuid[573]: Primary Header is updated. Aug 13 00:41:07.894141 disk-uuid[573]: Secondary Entries is updated. Aug 13 00:41:07.894141 disk-uuid[573]: Secondary Header is updated. Aug 13 00:41:07.901488 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:41:07.909487 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:41:07.915525 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:41:08.014605 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Aug 13 00:41:08.149313 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Aug 13 00:41:08.149378 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Aug 13 00:41:08.150808 kernel: usbcore: registered new interface driver usbhid Aug 13 00:41:08.150871 kernel: usbhid: USB HID core driver Aug 13 00:41:08.256511 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Aug 13 00:41:08.387503 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Aug 13 00:41:08.441484 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Aug 13 00:41:08.922506 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:41:08.924489 disk-uuid[575]: The operation has completed successfully. Aug 13 00:41:08.981154 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:41:08.981281 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:41:08.996622 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:41:09.002015 sh[592]: Success Aug 13 00:41:09.015494 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 00:41:09.086144 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:41:09.089441 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:41:09.095650 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Aug 13 00:41:09.112087 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982 Aug 13 00:41:09.112151 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:41:09.112171 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 00:41:09.112718 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 00:41:09.113520 kernel: BTRFS info (device dm-0): using free space tree Aug 13 00:41:09.121522 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 13 00:41:09.124605 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:41:09.125669 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:41:09.131634 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:41:09.134512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 00:41:09.150638 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:41:09.150688 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:41:09.150700 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:41:09.154928 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:41:09.154983 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:41:09.164835 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:41:09.165657 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:41:09.175312 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:41:09.183651 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:41:09.259212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:41:09.267676 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:41:09.284592 ignition[688]: Ignition 2.19.0 Aug 13 00:41:09.284602 ignition[688]: Stage: fetch-offline Aug 13 00:41:09.284645 ignition[688]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:09.284654 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:41:09.286317 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:41:09.284824 ignition[688]: parsed url from cmdline: "" Aug 13 00:41:09.284827 ignition[688]: no config URL provided Aug 13 00:41:09.284832 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:41:09.284839 ignition[688]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:41:09.284844 ignition[688]: failed to fetch config: resource requires networking Aug 13 00:41:09.285230 ignition[688]: Ignition finished successfully Aug 13 00:41:09.294715 systemd-networkd[778]: lo: Link UP Aug 13 00:41:09.294726 systemd-networkd[778]: lo: Gained carrier Aug 13 00:41:09.296741 systemd-networkd[778]: Enumeration completed Aug 13 00:41:09.296991 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:41:09.298639 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 13 00:41:09.298643 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:41:09.299823 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:41:09.299827 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:41:09.300415 systemd-networkd[778]: eth0: Link UP Aug 13 00:41:09.300419 systemd-networkd[778]: eth0: Gained carrier Aug 13 00:41:09.300426 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:41:09.302127 systemd[1]: Reached target network.target - Network. Aug 13 00:41:09.308699 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 00:41:09.309721 systemd-networkd[778]: eth1: Link UP Aug 13 00:41:09.309724 systemd-networkd[778]: eth1: Gained carrier Aug 13 00:41:09.309732 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:41:09.324809 ignition[781]: Ignition 2.19.0 Aug 13 00:41:09.324830 ignition[781]: Stage: fetch Aug 13 00:41:09.325111 ignition[781]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:09.325124 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:41:09.325226 ignition[781]: parsed url from cmdline: "" Aug 13 00:41:09.325229 ignition[781]: no config URL provided Aug 13 00:41:09.325234 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:41:09.325241 ignition[781]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:41:09.325261 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Aug 13 00:41:09.325959 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 00:41:09.336560 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Aug 13 00:41:09.363542 systemd-networkd[778]: eth0: DHCPv4 address 91.99.159.132/32, gateway 172.31.1.1 acquired from 172.31.1.1 Aug 13 00:41:09.526250 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Aug 13 00:41:09.533507 ignition[781]: GET result: OK Aug 13 00:41:09.533685 ignition[781]: parsing config with SHA512: 3c4b7f074d83f7e152d0922cc9d70a162bb2a1c36c9b1a047f44f071409c0f99a8fea91c05dc1b8c0e88fb42c845a1d774a040fa0619f93b1769130b1131f79e Aug 13 00:41:09.539255 unknown[781]: fetched base config from "system" Aug 13 00:41:09.539268 unknown[781]: fetched base config from "system" Aug 13 00:41:09.539701 ignition[781]: fetch: fetch complete Aug 13 00:41:09.539273 unknown[781]: fetched user config from "hetzner" Aug 13 00:41:09.539707 ignition[781]: fetch: fetch passed Aug 13 00:41:09.545572 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:41:09.539758 ignition[781]: Ignition finished successfully Aug 13 00:41:09.552627 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 13 00:41:09.567786 ignition[788]: Ignition 2.19.0 Aug 13 00:41:09.569053 ignition[788]: Stage: kargs Aug 13 00:41:09.569265 ignition[788]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:09.569277 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:41:09.570308 ignition[788]: kargs: kargs passed Aug 13 00:41:09.570362 ignition[788]: Ignition finished successfully Aug 13 00:41:09.575489 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:41:09.585274 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:41:09.597232 ignition[794]: Ignition 2.19.0 Aug 13 00:41:09.597246 ignition[794]: Stage: disks Aug 13 00:41:09.597449 ignition[794]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:09.597485 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:41:09.601145 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:41:09.598690 ignition[794]: disks: disks passed Aug 13 00:41:09.598748 ignition[794]: Ignition finished successfully Aug 13 00:41:09.604384 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:41:09.606333 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:41:09.607993 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:41:09.609674 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:41:09.610841 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:41:09.617716 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:41:09.634651 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Aug 13 00:41:09.638819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:41:09.651614 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:41:09.704717 kernel: EXT4-fs (sda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none. Aug 13 00:41:09.705904 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:41:09.707634 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:41:09.713566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:41:09.716644 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:41:09.722527 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 00:41:09.723188 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:41:09.723217 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:41:09.727921 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:41:09.736410 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (810) Aug 13 00:41:09.735440 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 00:41:09.741165 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:41:09.741221 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:41:09.742812 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:41:09.748819 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:41:09.748884 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:41:09.752581 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:41:09.791742 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:41:09.797580 coreos-metadata[812]: Aug 13 00:41:09.797 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Aug 13 00:41:09.798945 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:41:09.802195 coreos-metadata[812]: Aug 13 00:41:09.800 INFO Fetch successful Aug 13 00:41:09.802195 coreos-metadata[812]: Aug 13 00:41:09.800 INFO wrote hostname ci-4081-3-5-c-674096e178 to /sysroot/etc/hostname Aug 13 00:41:09.803319 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:41:09.807387 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:41:09.812280 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:41:09.913259 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:41:09.919642 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:41:09.924038 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:41:09.929622 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:41:09.957548 ignition[927]: INFO : Ignition 2.19.0 Aug 13 00:41:09.958550 ignition[927]: INFO : Stage: mount Aug 13 00:41:09.959628 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:09.959628 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:41:09.962477 ignition[927]: INFO : mount: mount passed Aug 13 00:41:09.962477 ignition[927]: INFO : Ignition finished successfully Aug 13 00:41:09.964540 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:41:09.965969 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:41:09.972615 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:41:10.113201 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:41:10.125884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:41:10.137485 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (938) Aug 13 00:41:10.139934 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:41:10.139995 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:41:10.140016 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:41:10.144142 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:41:10.144214 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:41:10.147475 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 00:41:10.174919 ignition[955]: INFO : Ignition 2.19.0 Aug 13 00:41:10.174919 ignition[955]: INFO : Stage: files Aug 13 00:41:10.177263 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:10.177263 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:41:10.177263 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:41:10.180098 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:41:10.180098 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:41:10.183553 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:41:10.183553 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:41:10.183553 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:41:10.181769 unknown[955]: wrote ssh authorized keys file for user: core Aug 13 00:41:10.189745 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:41:10.189745 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:41:10.189745 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:41:10.189745 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 13 00:41:10.276625 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:41:10.528694 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:41:10.528694 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:41:10.532012 ignition[955]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:41:10.532012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 13 00:41:10.622759 systemd-networkd[778]: eth1: Gained IPv6LL Aug 13 00:41:10.807110 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:41:11.014961 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:41:11.014961 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:41:11.019359 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:41:11.019359 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:41:11.019359 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:41:11.019359 ignition[955]: INFO : files: files 
passed Aug 13 00:41:11.019359 ignition[955]: INFO : Ignition finished successfully Aug 13 00:41:11.021487 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:41:11.034207 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:41:11.036688 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:41:11.040715 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:41:11.040818 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:41:11.056364 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:41:11.057768 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:41:11.059011 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:41:11.060849 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:41:11.062415 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:41:11.068707 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:41:11.096543 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:41:11.096727 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:41:11.101329 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:41:11.102749 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:41:11.104040 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:41:11.105846 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:41:11.141313 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:41:11.147661 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:41:11.162775 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:41:11.163726 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:41:11.165187 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:41:11.167622 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:41:11.167764 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:41:11.169595 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:41:11.170266 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:41:11.171340 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:41:11.172447 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:41:11.173635 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:41:11.174903 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:41:11.176045 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:41:11.177220 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:41:11.178413 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:41:11.179465 systemd[1]: Stopped target swap.target - Swaps. 
Aug 13 00:41:11.180612 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:41:11.180730 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:41:11.182076 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:41:11.182819 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:41:11.184056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:41:11.187664 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:41:11.188901 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:41:11.189072 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:41:11.192938 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:41:11.193122 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:41:11.195068 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:41:11.195176 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:41:11.196312 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:41:11.196410 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:41:11.199024 systemd-networkd[778]: eth0: Gained IPv6LL Aug 13 00:41:11.209894 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:41:11.215706 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:41:11.216445 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:41:11.216612 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:41:11.221668 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:41:11.221765 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:41:11.229024 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:41:11.229678 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:41:11.233026 ignition[1008]: INFO : Ignition 2.19.0 Aug 13 00:41:11.233026 ignition[1008]: INFO : Stage: umount Aug 13 00:41:11.233026 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:11.233026 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:41:11.239445 ignition[1008]: INFO : umount: umount passed Aug 13 00:41:11.239445 ignition[1008]: INFO : Ignition finished successfully Aug 13 00:41:11.235703 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:41:11.235803 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:41:11.237283 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:41:11.237368 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:41:11.238894 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:41:11.238950 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:41:11.240186 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:41:11.240230 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:41:11.240890 systemd[1]: Stopped target network.target - Network. Aug 13 00:41:11.243635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Aug 13 00:41:11.243694 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:41:11.245290 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:41:11.247940 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:41:11.254581 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:41:11.256285 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:41:11.260222 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:41:11.268815 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:41:11.268880 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:41:11.270629 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:41:11.270679 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:41:11.271354 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:41:11.271414 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:41:11.272601 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:41:11.272651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:41:11.274970 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:41:11.278915 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:41:11.282504 systemd-networkd[778]: eth1: DHCPv6 lease lost Aug 13 00:41:11.283849 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:41:11.284510 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:41:11.284604 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:41:11.286931 systemd-networkd[778]: eth0: DHCPv6 lease lost Aug 13 00:41:11.288504 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:41:11.288657 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:41:11.293260 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:41:11.293386 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:41:11.297073 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:41:11.297123 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:41:11.297932 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:41:11.297991 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:41:11.304644 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:41:11.305329 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:41:11.305386 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:41:11.308355 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:41:11.308412 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:41:11.310142 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:41:11.310198 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:41:11.311556 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:41:11.311599 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:41:11.314068 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Aug 13 00:41:11.327766 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:41:11.327933 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:41:11.329581 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:41:11.329711 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:41:11.331189 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:41:11.331229 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:41:11.332312 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:41:11.332344 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:41:11.334274 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:41:11.334322 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:41:11.337264 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:41:11.337314 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:41:11.339133 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:41:11.339181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:41:11.345677 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:41:11.346308 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:41:11.346367 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:41:11.348337 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:41:11.348383 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:41:11.350329 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:41:11.350376 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:41:11.351182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:41:11.351226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:41:11.356226 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:41:11.356321 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:41:11.357284 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:41:11.363948 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:41:11.375835 systemd[1]: Switching root. Aug 13 00:41:11.415410 systemd-journald[236]: Journal stopped Aug 13 00:41:12.354319 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). 
Aug 13 00:41:12.354382 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:41:12.354403 kernel: SELinux: policy capability open_perms=1 Aug 13 00:41:12.354418 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:41:12.354430 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:41:12.354440 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:41:12.354462 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:41:12.357080 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:41:12.357098 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:41:12.357108 kernel: audit: type=1403 audit(1755045671.595:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:41:12.357119 systemd[1]: Successfully loaded SELinux policy in 36.262ms. Aug 13 00:41:12.357144 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.831ms. Aug 13 00:41:12.357157 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:41:12.357173 systemd[1]: Detected virtualization kvm. Aug 13 00:41:12.357183 systemd[1]: Detected architecture arm64. Aug 13 00:41:12.357194 systemd[1]: Detected first boot. Aug 13 00:41:12.357205 systemd[1]: Hostname set to <ci-4081-3-5-c-674096e178>. Aug 13 00:41:12.357215 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:41:12.357225 zram_generator::config[1072]: No configuration found. Aug 13 00:41:12.357238 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:41:12.357248 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:41:12.357258 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 00:41:12.357269 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:41:12.357280 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:41:12.357290 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:41:12.357300 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:41:12.357311 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:41:12.357323 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:41:12.357335 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:41:12.357346 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:41:12.357356 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:41:12.357367 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:41:12.357377 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:41:12.357388 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:41:12.357399 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:41:12.357410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:41:12.359152 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 13 00:41:12.359185 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:41:12.359196 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:41:12.359207 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:41:12.359225 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:41:12.359236 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:41:12.359247 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:41:12.359261 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:41:12.359271 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:41:12.359282 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:41:12.359292 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:41:12.359303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:41:12.359314 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:41:12.359326 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:41:12.359363 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:41:12.359376 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:41:12.359389 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:41:12.359400 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:41:12.359411 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:41:12.359421 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:41:12.359502 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:41:12.359519 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:41:12.359533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:41:12.359544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:41:12.359555 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:41:12.359565 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:41:12.359575 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:41:12.359586 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:41:12.359597 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:41:12.359608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:41:12.359621 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:41:12.359631 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:41:12.359643 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Aug 13 00:41:12.359653 kernel: fuse: init (API version 7.39) Aug 13 00:41:12.360523 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:41:12.360559 kernel: loop: module loaded Aug 13 00:41:12.360572 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:41:12.360583 kernel: ACPI: bus type drm_connector registered Aug 13 00:41:12.360593 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:41:12.360611 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:41:12.360622 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:41:12.360633 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:41:12.360663 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:41:12.360678 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:41:12.360722 systemd-journald[1154]: Collecting audit messages is disabled. Aug 13 00:41:12.360750 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:41:12.360763 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:41:12.360775 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:41:12.360785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:41:12.360796 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:41:12.360807 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:41:12.360818 systemd-journald[1154]: Journal started Aug 13 00:41:12.360841 systemd-journald[1154]: Runtime Journal (/run/log/journal/d63d1c0df37b4e61b420514cee40868f) is 8.0M, max 76.6M, 68.6M free. Aug 13 00:41:12.363751 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:41:12.364313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:41:12.364552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:41:12.365880 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:41:12.366031 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:41:12.367255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:41:12.367411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:41:12.368581 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:41:12.368799 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:41:12.370016 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:41:12.372637 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:41:12.373713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:41:12.376030 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:41:12.377255 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:41:12.378432 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:41:12.391100 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:41:12.396662 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Aug 13 00:41:12.398497 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:41:12.400237 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:41:12.408767 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:41:12.416606 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:41:12.420601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:41:12.426632 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:41:12.427311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:41:12.433824 systemd-journald[1154]: Time spent on flushing to /var/log/journal/d63d1c0df37b4e61b420514cee40868f is 34.903ms for 1110 entries. Aug 13 00:41:12.433824 systemd-journald[1154]: System Journal (/var/log/journal/d63d1c0df37b4e61b420514cee40868f) is 8.0M, max 584.8M, 576.8M free. Aug 13 00:41:12.478626 systemd-journald[1154]: Received client request to flush runtime journal. Aug 13 00:41:12.435619 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:41:12.444607 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:41:12.449074 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:41:12.450954 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:41:12.457582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:41:12.466202 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:41:12.470951 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:41:12.472600 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:41:12.484114 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:41:12.501851 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Aug 13 00:41:12.502230 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Aug 13 00:41:12.503992 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:41:12.511486 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:41:12.521814 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:41:12.526514 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:41:12.554132 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:41:12.565082 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:41:12.578717 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Aug 13 00:41:12.578735 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Aug 13 00:41:12.585694 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:41:12.985399 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Aug 13 00:41:12.992653 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:41:13.031567 systemd-udevd[1233]: Using default interface naming scheme 'v255'. Aug 13 00:41:13.060625 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:41:13.070636 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:41:13.089290 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:41:13.128434 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Aug 13 00:41:13.165579 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:41:13.243345 systemd-networkd[1238]: lo: Link UP Aug 13 00:41:13.243353 systemd-networkd[1238]: lo: Gained carrier Aug 13 00:41:13.246263 systemd-networkd[1238]: Enumeration completed Aug 13 00:41:13.246395 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:41:13.246868 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:41:13.246872 systemd-networkd[1238]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:41:13.248105 systemd-networkd[1238]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:41:13.248115 systemd-networkd[1238]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:41:13.249689 systemd-networkd[1238]: eth0: Link UP Aug 13 00:41:13.252524 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1236) Aug 13 00:41:13.249701 systemd-networkd[1238]: eth0: Gained carrier Aug 13 00:41:13.249715 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:41:13.252737 systemd-networkd[1238]: eth1: Link UP Aug 13 00:41:13.252746 systemd-networkd[1238]: eth1: Gained carrier Aug 13 00:41:13.252761 systemd-networkd[1238]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:41:13.254363 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:41:13.287577 systemd-networkd[1238]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Aug 13 00:41:13.317344 systemd-networkd[1238]: eth0: DHCPv4 address 91.99.159.132/32, gateway 172.31.1.1 acquired from 172.31.1.1 Aug 13 00:41:13.318479 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:41:13.336436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:41:13.341740 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:41:13.348754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:41:13.352258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:41:13.358945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Aug 13 00:41:13.361116 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Aug 13 00:41:13.361166 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 13 00:41:13.361179 kernel: [drm] features: -context_init Aug 13 00:41:13.364650 kernel: [drm] number of scanouts: 1 Aug 13 00:41:13.364694 kernel: [drm] number of cap sets: 0 Aug 13 00:41:13.364732 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:41:13.364780 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:41:13.365112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:41:13.365269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:41:13.370468 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Aug 13 00:41:13.383082 kernel: Console: switching to colour frame buffer device 160x50 Aug 13 00:41:13.396552 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 13 00:41:13.396804 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:41:13.397073 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:41:13.399868 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:41:13.402200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:41:13.403123 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:41:13.403163 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:41:13.423108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:41:13.489127 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:41:13.571218 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:41:13.580665 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:41:13.596500 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:41:13.624117 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:41:13.627754 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:41:13.640760 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:41:13.646177 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:41:13.681094 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:41:13.682364 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:41:13.683394 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:41:13.683554 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:41:13.684276 systemd[1]: Reached target machines.target - Containers. Aug 13 00:41:13.686335 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Aug 13 00:41:13.698723 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:41:13.703650 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:41:13.704638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:41:13.706723 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:41:13.709715 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:41:13.713842 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:41:13.717084 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:41:13.739805 kernel: loop0: detected capacity change from 0 to 8 Aug 13 00:41:13.742509 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:41:13.751965 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:41:13.758297 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:41:13.760145 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 00:41:13.776839 kernel: loop1: detected capacity change from 0 to 203944 Aug 13 00:41:13.811479 kernel: loop2: detected capacity change from 0 to 114432 Aug 13 00:41:13.842977 kernel: loop3: detected capacity change from 0 to 114328 Aug 13 00:41:13.873530 kernel: loop4: detected capacity change from 0 to 8 Aug 13 00:41:13.878504 kernel: loop5: detected capacity change from 0 to 203944 Aug 13 00:41:13.897685 kernel: loop6: detected capacity change from 0 to 114432 Aug 13 00:41:13.913572 kernel: loop7: detected capacity change from 0 to 114328 Aug 13 00:41:13.923991 (sd-merge)[1326]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Aug 13 00:41:13.924640 (sd-merge)[1326]: Merged extensions into '/usr'. Aug 13 00:41:13.930221 systemd[1]: Reloading requested from client PID 1312 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:41:13.930236 systemd[1]: Reloading... Aug 13 00:41:14.006567 zram_generator::config[1354]: No configuration found. Aug 13 00:41:14.133536 ldconfig[1308]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:41:14.138813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:41:14.197138 systemd[1]: Reloading finished in 266 ms. Aug 13 00:41:14.217486 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:41:14.221620 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:41:14.227938 systemd[1]: Starting ensure-sysext.service... Aug 13 00:41:14.229610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:41:14.239142 systemd[1]: Reloading requested from client PID 1398 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:41:14.239159 systemd[1]: Reloading... Aug 13 00:41:14.254264 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Aug 13 00:41:14.254915 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:41:14.255656 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:41:14.256026 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. Aug 13 00:41:14.256152 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. Aug 13 00:41:14.259900 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:41:14.260013 systemd-tmpfiles[1399]: Skipping /boot Aug 13 00:41:14.269700 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:41:14.269802 systemd-tmpfiles[1399]: Skipping /boot Aug 13 00:41:14.311479 zram_generator::config[1427]: No configuration found. Aug 13 00:41:14.335579 systemd-networkd[1238]: eth0: Gained IPv6LL Aug 13 00:41:14.427627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:41:14.490307 systemd[1]: Reloading finished in 250 ms. Aug 13 00:41:14.511787 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:41:14.518249 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:41:14.536867 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:41:14.541754 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:41:14.553445 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:41:14.557774 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:41:14.562043 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:41:14.571377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:41:14.581655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:41:14.594738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:41:14.609714 augenrules[1500]: No rules Aug 13 00:41:14.610750 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:41:14.612554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:41:14.617795 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:41:14.620760 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:41:14.626235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:41:14.626417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:41:14.628675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:41:14.628888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:41:14.630705 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:41:14.630934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 13 00:41:14.644975 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:41:14.646757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:41:14.651208 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:41:14.666219 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:41:14.667206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:41:14.675489 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:41:14.677200 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:41:14.683825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:41:14.684017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:41:14.686045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:41:14.686213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:41:14.688859 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:41:14.692424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:41:14.700951 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:41:14.713159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:41:14.722777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:41:14.726699 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:41:14.738741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:41:14.741703 systemd-resolved[1487]: Positive Trust Anchors: Aug 13 00:41:14.741716 systemd-resolved[1487]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:41:14.741748 systemd-resolved[1487]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:41:14.743878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:41:14.747315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:41:14.747496 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:41:14.748577 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:41:14.749593 systemd-resolved[1487]: Using system hostname 'ci-4081-3-5-c-674096e178'. Aug 13 00:41:14.752039 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Aug 13 00:41:14.752221 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:41:14.754155 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:41:14.755807 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:41:14.762029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:41:14.765957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:41:14.766589 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:41:14.769576 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:41:14.769990 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:41:14.777234 systemd[1]: Reached target network.target - Network. Aug 13 00:41:14.778098 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:41:14.778951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:41:14.779833 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:41:14.780044 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:41:14.780797 systemd[1]: Finished ensure-sysext.service. Aug 13 00:41:14.787993 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:41:14.886208 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:41:14.887761 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:41:14.889520 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:41:14.890485 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:41:14.891360 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:41:14.892295 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:41:14.892332 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:41:14.892926 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:41:14.893681 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:41:14.894430 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:41:14.895135 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:41:14.896671 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:41:14.898878 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:41:14.900755 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:41:14.903975 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:41:14.904649 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:41:14.905369 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:41:14.906588 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:41:14.906655 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Aug 13 00:41:14.906692 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:41:14.909522 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:41:14.910630 systemd-networkd[1238]: eth1: Gained IPv6LL Aug 13 00:41:14.913626 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:41:14.918356 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:41:14.924694 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:41:14.933607 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:41:14.934359 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:41:14.939594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:41:14.943668 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:41:14.950469 jq[1555]: false Aug 13 00:41:14.954599 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:41:14.962304 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:41:14.967257 coreos-metadata[1552]: Aug 13 00:41:14.967 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Aug 13 00:41:14.971651 coreos-metadata[1552]: Aug 13 00:41:14.968 INFO Fetch successful Aug 13 00:41:14.971651 coreos-metadata[1552]: Aug 13 00:41:14.970 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Aug 13 00:41:14.971651 coreos-metadata[1552]: Aug 13 00:41:14.971 INFO Fetch successful Aug 13 00:41:14.970541 dbus-daemon[1554]: [system] SELinux support is enabled Aug 13 00:41:14.982661 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Aug 13 00:41:14.986622 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:41:14.993049 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:41:14.997602 extend-filesystems[1558]: Found loop4 Aug 13 00:41:14.997602 extend-filesystems[1558]: Found loop5 Aug 13 00:41:14.997602 extend-filesystems[1558]: Found loop6 Aug 13 00:41:14.997602 extend-filesystems[1558]: Found loop7 Aug 13 00:41:14.997602 extend-filesystems[1558]: Found sda Aug 13 00:41:14.997602 extend-filesystems[1558]: Found sda1 Aug 13 00:41:14.997602 extend-filesystems[1558]: Found sda2 Aug 13 00:41:14.997602 extend-filesystems[1558]: Found sda3 Aug 13 00:41:14.997602 extend-filesystems[1558]: Found usr Aug 13 00:41:14.997602 extend-filesystems[1558]: Found sda4 Aug 13 00:41:14.997602 extend-filesystems[1558]: Found sda6 Aug 13 00:41:14.997602 extend-filesystems[1558]: Found sda7 Aug 13 00:41:14.997602 extend-filesystems[1558]: Found sda9 Aug 13 00:41:14.997602 extend-filesystems[1558]: Checking size of /dev/sda9 Aug 13 00:41:15.011153 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:41:14.640754 systemd-journald[1154]: Time jumped backwards, rotating. Aug 13 00:41:14.578187 systemd-timesyncd[1547]: Contacted time server 194.50.19.117:123 (0.flatcar.pool.ntp.org). Aug 13 00:41:14.578200 systemd-resolved[1487]: Clock change detected. Flushing caches. Aug 13 00:41:14.580977 systemd-timesyncd[1547]: Initial clock synchronization to Wed 2025-08-13 00:41:14.576857 UTC. 
Aug 13 00:41:14.582512 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:41:14.590489 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:41:14.594903 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:41:14.600777 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:41:14.635124 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:41:14.635442 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:41:14.639210 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:41:14.639488 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:41:14.674004 jq[1587]: true Aug 13 00:41:14.674240 extend-filesystems[1558]: Resized partition /dev/sda9 Aug 13 00:41:14.645477 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:41:14.679633 extend-filesystems[1604]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:41:14.725994 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Aug 13 00:41:14.650216 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:41:14.770151 update_engine[1585]: I20250813 00:41:14.740911 1585 main.cc:92] Flatcar Update Engine starting Aug 13 00:41:14.770151 update_engine[1585]: I20250813 00:41:14.753078 1585 update_check_scheduler.cc:74] Next update check in 6m44s Aug 13 00:41:14.650505 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:41:14.683436 (ntainerd)[1603]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:41:14.820117 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1254) Aug 13 00:41:14.820139 tar[1600]: linux-arm64/helm Aug 13 00:41:14.706616 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:41:14.820518 jq[1610]: true Aug 13 00:41:14.706673 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:41:14.711567 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:41:14.711591 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:41:14.725627 systemd-logind[1581]: New seat seat0. Aug 13 00:41:14.751925 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:41:14.753346 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:41:14.760091 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:41:14.767557 systemd-logind[1581]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 00:41:14.767574 systemd-logind[1581]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Aug 13 00:41:14.768069 systemd[1]: Started systemd-logind.service - User Login Management. 
Aug 13 00:41:14.821546 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:41:14.829546 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:41:14.867340 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Aug 13 00:41:14.891452 extend-filesystems[1604]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 00:41:14.891452 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 5 Aug 13 00:41:14.891452 extend-filesystems[1604]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Aug 13 00:41:14.886213 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:41:14.898043 extend-filesystems[1558]: Resized filesystem in /dev/sda9 Aug 13 00:41:14.898043 extend-filesystems[1558]: Found sr0 Aug 13 00:41:14.886503 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:41:14.912127 bash[1648]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:41:14.902033 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:41:14.913523 systemd[1]: Starting sshkeys.service... Aug 13 00:41:14.951569 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:41:14.964220 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:41:15.048751 coreos-metadata[1659]: Aug 13 00:41:15.048 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Aug 13 00:41:15.050094 coreos-metadata[1659]: Aug 13 00:41:15.050 INFO Fetch successful Aug 13 00:41:15.064424 unknown[1659]: wrote ssh authorized keys file for user: core Aug 13 00:41:15.120548 update-ssh-keys[1667]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:41:15.116336 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:41:15.130687 systemd[1]: Finished sshkeys.service. Aug 13 00:41:15.169998 containerd[1603]: time="2025-08-13T00:41:15.168187347Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 00:41:15.197190 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:41:15.252585 containerd[1603]: time="2025-08-13T00:41:15.250725387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:41:15.254323 containerd[1603]: time="2025-08-13T00:41:15.254254587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:41:15.254323 containerd[1603]: time="2025-08-13T00:41:15.254318827Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:41:15.254406 containerd[1603]: time="2025-08-13T00:41:15.254338947Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:41:15.254535 containerd[1603]: time="2025-08-13T00:41:15.254510387Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Aug 13 00:41:15.254583 containerd[1603]: time="2025-08-13T00:41:15.254541827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:41:15.254631 containerd[1603]: time="2025-08-13T00:41:15.254609347Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:41:15.254631 containerd[1603]: time="2025-08-13T00:41:15.254628347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:41:15.255890 containerd[1603]: time="2025-08-13T00:41:15.254856627Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:41:15.255890 containerd[1603]: time="2025-08-13T00:41:15.254921747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:41:15.255890 containerd[1603]: time="2025-08-13T00:41:15.254938747Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:41:15.255890 containerd[1603]: time="2025-08-13T00:41:15.254953787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:41:15.255890 containerd[1603]: time="2025-08-13T00:41:15.255033707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:41:15.255890 containerd[1603]: time="2025-08-13T00:41:15.255230987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:41:15.255890 containerd[1603]: time="2025-08-13T00:41:15.255423667Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:41:15.255890 containerd[1603]: time="2025-08-13T00:41:15.255441267Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:41:15.255890 containerd[1603]: time="2025-08-13T00:41:15.255543667Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:41:15.255890 containerd[1603]: time="2025-08-13T00:41:15.255588707Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:41:15.263607 containerd[1603]: time="2025-08-13T00:41:15.263568467Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:41:15.263693 containerd[1603]: time="2025-08-13T00:41:15.263673347Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:41:15.263717 containerd[1603]: time="2025-08-13T00:41:15.263699867Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:41:15.263784 containerd[1603]: time="2025-08-13T00:41:15.263765787Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Aug 13 00:41:15.263809 containerd[1603]: time="2025-08-13T00:41:15.263790147Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:41:15.263983 containerd[1603]: time="2025-08-13T00:41:15.263962187Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:41:15.265442 containerd[1603]: time="2025-08-13T00:41:15.265410827Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:41:15.265640 containerd[1603]: time="2025-08-13T00:41:15.265617867Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:41:15.265664 containerd[1603]: time="2025-08-13T00:41:15.265647267Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:41:15.265682 containerd[1603]: time="2025-08-13T00:41:15.265662107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:41:15.265682 containerd[1603]: time="2025-08-13T00:41:15.265677067Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:41:15.265713 containerd[1603]: time="2025-08-13T00:41:15.265690427Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:41:15.265713 containerd[1603]: time="2025-08-13T00:41:15.265703667Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:41:15.265748 containerd[1603]: time="2025-08-13T00:41:15.265718347Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:41:15.265748 containerd[1603]: time="2025-08-13T00:41:15.265733547Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:41:15.265779 containerd[1603]: time="2025-08-13T00:41:15.265747027Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:41:15.265779 containerd[1603]: time="2025-08-13T00:41:15.265760427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:41:15.265779 containerd[1603]: time="2025-08-13T00:41:15.265772267Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:41:15.265822 containerd[1603]: time="2025-08-13T00:41:15.265794107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.265822 containerd[1603]: time="2025-08-13T00:41:15.265808987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.265858 containerd[1603]: time="2025-08-13T00:41:15.265821227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.265858 containerd[1603]: time="2025-08-13T00:41:15.265835627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.265858 containerd[1603]: time="2025-08-13T00:41:15.265847907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Aug 13 00:41:15.265916 containerd[1603]: time="2025-08-13T00:41:15.265861307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.265916 containerd[1603]: time="2025-08-13T00:41:15.265873387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.265955 containerd[1603]: time="2025-08-13T00:41:15.265916787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.265955 containerd[1603]: time="2025-08-13T00:41:15.265930627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.265955 containerd[1603]: time="2025-08-13T00:41:15.265946067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.266000 containerd[1603]: time="2025-08-13T00:41:15.265960347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.266000 containerd[1603]: time="2025-08-13T00:41:15.265972547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.266000 containerd[1603]: time="2025-08-13T00:41:15.265984467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.266047 containerd[1603]: time="2025-08-13T00:41:15.265999267Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:41:15.266047 containerd[1603]: time="2025-08-13T00:41:15.266021467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.266047 containerd[1603]: time="2025-08-13T00:41:15.266033507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.266047 containerd[1603]: time="2025-08-13T00:41:15.266045027Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:41:15.266884 containerd[1603]: time="2025-08-13T00:41:15.266151867Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:41:15.266884 containerd[1603]: time="2025-08-13T00:41:15.266176547Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:41:15.266884 containerd[1603]: time="2025-08-13T00:41:15.266188827Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:41:15.266884 containerd[1603]: time="2025-08-13T00:41:15.266200587Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:41:15.266884 containerd[1603]: time="2025-08-13T00:41:15.266209987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.266884 containerd[1603]: time="2025-08-13T00:41:15.266224267Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:41:15.266884 containerd[1603]: time="2025-08-13T00:41:15.266235067Z" level=info msg="NRI interface is disabled by configuration." 
Aug 13 00:41:15.266884 containerd[1603]: time="2025-08-13T00:41:15.266245467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:41:15.267046 containerd[1603]: time="2025-08-13T00:41:15.266603427Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:41:15.267046 containerd[1603]: time="2025-08-13T00:41:15.266664987Z" level=info msg="Connect containerd service" Aug 13 00:41:15.267046 containerd[1603]: time="2025-08-13T00:41:15.266760707Z" level=info msg="using legacy CRI server" Aug 13 00:41:15.267046 containerd[1603]: time="2025-08-13T00:41:15.266768307Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:41:15.267046 containerd[1603]: time="2025-08-13T00:41:15.266856187Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:41:15.270910 containerd[1603]: time="2025-08-13T00:41:15.270859787Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:41:15.274557 containerd[1603]: time="2025-08-13T00:41:15.274511987Z" level=info msg="Start subscribing containerd event" Aug 13 00:41:15.274596 containerd[1603]: time="2025-08-13T00:41:15.274576467Z" level=info msg="Start recovering state" Aug 13 00:41:15.274688 containerd[1603]: time="2025-08-13T00:41:15.274667987Z" level=info msg="Start event monitor" Aug 13 00:41:15.274729 containerd[1603]: time="2025-08-13T00:41:15.274689867Z" level=info msg="Start snapshots syncer" Aug 13 00:41:15.274729 containerd[1603]: time="2025-08-13T00:41:15.274701267Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:41:15.274729 containerd[1603]: time="2025-08-13T00:41:15.274708787Z" level=info msg="Start streaming server" Aug 13 00:41:15.276153 containerd[1603]: time="2025-08-13T00:41:15.276123947Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:41:15.277046 containerd[1603]: time="2025-08-13T00:41:15.276196467Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:41:15.277046 containerd[1603]: time="2025-08-13T00:41:15.276254027Z" level=info msg="containerd successfully booted in 0.111552s" Aug 13 00:41:15.276414 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:41:15.712243 tar[1600]: linux-arm64/LICENSE Aug 13 00:41:15.712243 tar[1600]: linux-arm64/README.md Aug 13 00:41:15.736974 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:41:15.888086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:41:15.891205 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:41:16.444281 kubelet[1693]: E0813 00:41:16.444211 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:41:16.447359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:41:16.447511 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:41:16.638658 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:41:16.665499 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:41:16.671279 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:41:16.693308 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:41:16.693605 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:41:16.702363 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:41:16.714357 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:41:16.724034 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:41:16.734679 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 13 00:41:16.737522 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:41:16.738722 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:41:16.739958 systemd[1]: Startup finished in 5.769s (kernel) + 5.614s (userspace) = 11.384s. 
Aug 13 00:41:20.694040 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:41:20.710297 systemd[1]: Started sshd@0-91.99.159.132:22-139.178.89.65:39158.service - OpenSSH per-connection server daemon (139.178.89.65:39158). Aug 13 00:41:21.712414 sshd[1726]: Accepted publickey for core from 139.178.89.65 port 39158 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:41:21.715810 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:21.725405 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:41:21.731161 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:41:21.734678 systemd-logind[1581]: New session 1 of user core. Aug 13 00:41:21.751296 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:41:21.758492 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:41:21.764533 (systemd)[1732]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:41:21.874800 systemd[1732]: Queued start job for default target default.target. Aug 13 00:41:21.875200 systemd[1732]: Created slice app.slice - User Application Slice. Aug 13 00:41:21.875219 systemd[1732]: Reached target paths.target - Paths. Aug 13 00:41:21.875274 systemd[1732]: Reached target timers.target - Timers. Aug 13 00:41:21.882030 systemd[1732]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:41:21.892729 systemd[1732]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:41:21.893386 systemd[1732]: Reached target sockets.target - Sockets. Aug 13 00:41:21.893526 systemd[1732]: Reached target basic.target - Basic System. Aug 13 00:41:21.893637 systemd[1732]: Reached target default.target - Main User Target. Aug 13 00:41:21.893671 systemd[1732]: Startup finished in 122ms. Aug 13 00:41:21.893915 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:41:21.901547 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:41:22.599936 systemd[1]: Started sshd@1-91.99.159.132:22-139.178.89.65:39168.service - OpenSSH per-connection server daemon (139.178.89.65:39168). Aug 13 00:41:23.602946 sshd[1744]: Accepted publickey for core from 139.178.89.65 port 39168 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:41:23.605209 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:23.610913 systemd-logind[1581]: New session 2 of user core. Aug 13 00:41:23.618391 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:41:24.295344 sshd[1744]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:24.301530 systemd[1]: sshd@1-91.99.159.132:22-139.178.89.65:39168.service: Deactivated successfully. Aug 13 00:41:24.304823 systemd-logind[1581]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:41:24.305190 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:41:24.306704 systemd-logind[1581]: Removed session 2. Aug 13 00:41:24.471344 systemd[1]: Started sshd@2-91.99.159.132:22-139.178.89.65:39184.service - OpenSSH per-connection server daemon (139.178.89.65:39184). 
Aug 13 00:41:25.464027 sshd[1752]: Accepted publickey for core from 139.178.89.65 port 39184 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:41:25.466088 sshd[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:25.472090 systemd-logind[1581]: New session 3 of user core. Aug 13 00:41:25.482406 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:41:26.149579 sshd[1752]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:26.156122 systemd-logind[1581]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:41:26.156875 systemd[1]: sshd@2-91.99.159.132:22-139.178.89.65:39184.service: Deactivated successfully. Aug 13 00:41:26.159325 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:41:26.160427 systemd-logind[1581]: Removed session 3. Aug 13 00:41:26.324469 systemd[1]: Started sshd@3-91.99.159.132:22-139.178.89.65:39190.service - OpenSSH per-connection server daemon (139.178.89.65:39190). Aug 13 00:41:26.698164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:41:26.705210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:41:26.836081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:41:26.847596 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:41:26.901654 kubelet[1774]: E0813 00:41:26.901573 1774 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:41:26.905268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:41:26.905481 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:41:27.315550 sshd[1760]: Accepted publickey for core from 139.178.89.65 port 39190 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:41:27.317775 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:27.324479 systemd-logind[1581]: New session 4 of user core. Aug 13 00:41:27.338785 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:41:28.004337 sshd[1760]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:28.009501 systemd[1]: sshd@3-91.99.159.132:22-139.178.89.65:39190.service: Deactivated successfully. Aug 13 00:41:28.013984 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:41:28.014816 systemd-logind[1581]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:41:28.016128 systemd-logind[1581]: Removed session 4. Aug 13 00:41:28.193380 systemd[1]: Started sshd@4-91.99.159.132:22-139.178.89.65:39206.service - OpenSSH per-connection server daemon (139.178.89.65:39206). Aug 13 00:41:29.236798 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 39206 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:41:29.239216 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:29.245154 systemd-logind[1581]: New session 5 of user core. Aug 13 00:41:29.254481 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 13 00:41:29.805832 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:41:29.806632 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:41:29.821753 sudo[1793]: pam_unix(sudo:session): session closed for user root Aug 13 00:41:29.993422 sshd[1789]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:30.000240 systemd[1]: sshd@4-91.99.159.132:22-139.178.89.65:39206.service: Deactivated successfully. Aug 13 00:41:30.005078 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:41:30.006621 systemd-logind[1581]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:41:30.008210 systemd-logind[1581]: Removed session 5. Aug 13 00:41:30.159439 systemd[1]: Started sshd@5-91.99.159.132:22-139.178.89.65:48460.service - OpenSSH per-connection server daemon (139.178.89.65:48460). Aug 13 00:41:31.161866 sshd[1798]: Accepted publickey for core from 139.178.89.65 port 48460 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:41:31.163915 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:31.169506 systemd-logind[1581]: New session 6 of user core. Aug 13 00:41:31.179469 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:41:31.691524 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:41:31.692026 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:41:31.697040 sudo[1803]: pam_unix(sudo:session): session closed for user root Aug 13 00:41:31.703063 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:41:31.703411 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:41:31.719150 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:41:31.727862 auditctl[1806]: No rules Aug 13 00:41:31.728748 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:41:31.729082 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:41:31.742330 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:41:31.771198 augenrules[1825]: No rules Aug 13 00:41:31.772799 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:41:31.775225 sudo[1802]: pam_unix(sudo:session): session closed for user root Aug 13 00:41:31.938313 sshd[1798]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:31.943721 systemd-logind[1581]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:41:31.944046 systemd[1]: sshd@5-91.99.159.132:22-139.178.89.65:48460.service: Deactivated successfully. Aug 13 00:41:31.948564 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:41:31.950093 systemd-logind[1581]: Removed session 6. Aug 13 00:41:32.108152 systemd[1]: Started sshd@6-91.99.159.132:22-139.178.89.65:48470.service - OpenSSH per-connection server daemon (139.178.89.65:48470). Aug 13 00:41:33.120793 sshd[1834]: Accepted publickey for core from 139.178.89.65 port 48470 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:41:33.122719 sshd[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:33.128046 systemd-logind[1581]: New session 7 of user core. 
Aug 13 00:41:33.135230 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:41:33.652145 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:41:33.652457 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:41:33.950410 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:41:33.951670 (dockerd)[1854]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:41:34.196802 dockerd[1854]: time="2025-08-13T00:41:34.196713747Z" level=info msg="Starting up" Aug 13 00:41:34.281000 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2709728576-merged.mount: Deactivated successfully. Aug 13 00:41:34.327928 dockerd[1854]: time="2025-08-13T00:41:34.327737227Z" level=info msg="Loading containers: start." Aug 13 00:41:34.444918 kernel: Initializing XFRM netlink socket Aug 13 00:41:34.525550 systemd-networkd[1238]: docker0: Link UP Aug 13 00:41:34.542008 dockerd[1854]: time="2025-08-13T00:41:34.541726187Z" level=info msg="Loading containers: done." Aug 13 00:41:34.557779 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2397158272-merged.mount: Deactivated successfully. Aug 13 00:41:34.559681 dockerd[1854]: time="2025-08-13T00:41:34.559639187Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:41:34.560109 dockerd[1854]: time="2025-08-13T00:41:34.559957067Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:41:34.560288 dockerd[1854]: time="2025-08-13T00:41:34.560210107Z" level=info msg="Daemon has completed initialization" Aug 13 00:41:34.607605 dockerd[1854]: time="2025-08-13T00:41:34.607416947Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:41:34.608597 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:41:35.729523 containerd[1603]: time="2025-08-13T00:41:35.729109467Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:41:36.045339 systemd[1]: Started sshd@7-91.99.159.132:22-18.224.184.103:48038.service - OpenSSH per-connection server daemon (18.224.184.103:48038). Aug 13 00:41:36.373168 sshd[1995]: Connection closed by 18.224.184.103 port 48038 Aug 13 00:41:36.374077 systemd[1]: sshd@7-91.99.159.132:22-18.224.184.103:48038.service: Deactivated successfully. Aug 13 00:41:36.461860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711803715.mount: Deactivated successfully. Aug 13 00:41:37.155780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:41:37.165268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:41:37.287359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:41:37.300671 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:41:37.360156 kubelet[2060]: E0813 00:41:37.360076 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:41:37.365070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:41:37.365318 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:41:37.868354 containerd[1603]: time="2025-08-13T00:41:37.868278107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:37.870666 containerd[1603]: time="2025-08-13T00:41:37.870604827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651905" Aug 13 00:41:37.871430 containerd[1603]: time="2025-08-13T00:41:37.870744147Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:37.874764 containerd[1603]: time="2025-08-13T00:41:37.874690867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:37.876331 containerd[1603]: time="2025-08-13T00:41:37.876045027Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 2.14688596s" Aug 13 00:41:37.876331 containerd[1603]: time="2025-08-13T00:41:37.876096707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 13 00:41:37.879001 containerd[1603]: time="2025-08-13T00:41:37.878958467Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:41:39.760340 containerd[1603]: time="2025-08-13T00:41:39.760262547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:39.762799 containerd[1603]: time="2025-08-13T00:41:39.762110267Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460303" Aug 13 00:41:39.764353 containerd[1603]: time="2025-08-13T00:41:39.764255747Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:39.774427 containerd[1603]: time="2025-08-13T00:41:39.773249707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 
00:41:39.775148 containerd[1603]: time="2025-08-13T00:41:39.775099307Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.89609792s" Aug 13 00:41:39.775268 containerd[1603]: time="2025-08-13T00:41:39.775250547Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 13 00:41:39.776500 containerd[1603]: time="2025-08-13T00:41:39.776461507Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:41:41.168844 containerd[1603]: time="2025-08-13T00:41:41.168761187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:41.170534 containerd[1603]: time="2025-08-13T00:41:41.170447667Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125109" Aug 13 00:41:41.171640 containerd[1603]: time="2025-08-13T00:41:41.171580987Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:41.175166 containerd[1603]: time="2025-08-13T00:41:41.175073227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:41.176873 containerd[1603]: time="2025-08-13T00:41:41.176710027Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.400207s" Aug 13 00:41:41.176873 containerd[1603]: time="2025-08-13T00:41:41.176760307Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 13 00:41:41.178215 containerd[1603]: time="2025-08-13T00:41:41.178182547Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:41:42.210346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344213820.mount: Deactivated successfully. 
Aug 13 00:41:42.644717 containerd[1603]: time="2025-08-13T00:41:42.644551787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:42.646027 containerd[1603]: time="2025-08-13T00:41:42.645761387Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26916019" Aug 13 00:41:42.646977 containerd[1603]: time="2025-08-13T00:41:42.646918867Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:42.650482 containerd[1603]: time="2025-08-13T00:41:42.649856587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:42.650803 containerd[1603]: time="2025-08-13T00:41:42.650769627Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.47243288s" Aug 13 00:41:42.650930 containerd[1603]: time="2025-08-13T00:41:42.650907067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 13 00:41:42.651676 containerd[1603]: time="2025-08-13T00:41:42.651642227Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:41:43.242499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370556413.mount: Deactivated successfully. 
Aug 13 00:41:43.974913 containerd[1603]: time="2025-08-13T00:41:43.974818307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:43.977271 containerd[1603]: time="2025-08-13T00:41:43.976673507Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Aug 13 00:41:43.982327 containerd[1603]: time="2025-08-13T00:41:43.980984787Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:43.984173 containerd[1603]: time="2025-08-13T00:41:43.984124067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:43.985925 containerd[1603]: time="2025-08-13T00:41:43.985770667Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.33393164s" Aug 13 00:41:43.985925 containerd[1603]: time="2025-08-13T00:41:43.985805227Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:41:43.987459 containerd[1603]: time="2025-08-13T00:41:43.987102307Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:41:44.554622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941870515.mount: Deactivated successfully. 
Aug 13 00:41:44.564148 containerd[1603]: time="2025-08-13T00:41:44.564035987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:44.566407 containerd[1603]: time="2025-08-13T00:41:44.565931507Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Aug 13 00:41:44.568016 containerd[1603]: time="2025-08-13T00:41:44.567964347Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:44.570738 containerd[1603]: time="2025-08-13T00:41:44.570692707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:44.572195 containerd[1603]: time="2025-08-13T00:41:44.572148787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 584.7892ms" Aug 13 00:41:44.572473 containerd[1603]: time="2025-08-13T00:41:44.572345867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:41:44.573072 containerd[1603]: time="2025-08-13T00:41:44.573028867Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:41:45.137447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396703464.mount: Deactivated successfully. Aug 13 00:41:47.551945 containerd[1603]: time="2025-08-13T00:41:47.551848590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:47.553934 containerd[1603]: time="2025-08-13T00:41:47.553672844Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" Aug 13 00:41:47.555808 containerd[1603]: time="2025-08-13T00:41:47.555712921Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:47.560542 containerd[1603]: time="2025-08-13T00:41:47.560486659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:41:47.563515 containerd[1603]: time="2025-08-13T00:41:47.563268316Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.99020537s" Aug 13 00:41:47.563515 containerd[1603]: time="2025-08-13T00:41:47.563315672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 00:41:47.615931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
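The failed kubelet starts above (restart counter now at 3) are all for the same reason first recorded at 00:41:37: /var/lib/kubelet/config.yaml does not exist yet. This is the expected pre-bootstrap state; the unit keeps crash-looping until the node's bootstrap tooling (typically kubeadm, presumably driven here by the install.sh invoked under sudo at 00:41:33) writes that file, which evidently happens before the successful start at 00:41:53 below. Purely as an illustrative sketch — nothing here is recovered from this host, every value is an assumption — a minimal KubeletConfiguration of the kind that lands at that path:

# /var/lib/kubelet/config.yaml — hypothetical minimal example, not the file from this host
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  x509:
    # the log below shows the kubelet watching this CA bundle
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
# the container manager config below reports "CgroupDriver":"cgroupfs"
cgroupDriver: cgroupfs
# matches the "Adding static pod path" entry below
staticPodPath: /etc/kubernetes/manifests

Once this file exists, the 00:41:53 start proceeds into the full startup banner that follows.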
Aug 13 00:41:47.623081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:41:47.743133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:41:47.744084 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:41:47.799176 kubelet[2217]: E0813 00:41:47.799116 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:41:47.801788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:41:47.802125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:41:53.153831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:41:53.169288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:41:53.214154 systemd[1]: Reloading requested from client PID 2246 ('systemctl') (unit session-7.scope)... Aug 13 00:41:53.214174 systemd[1]: Reloading... Aug 13 00:41:53.330932 zram_generator::config[2296]: No configuration found. Aug 13 00:41:53.426693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:41:53.494274 systemd[1]: Reloading finished in 279 ms. Aug 13 00:41:53.560505 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:41:53.560695 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:41:53.561330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:41:53.565270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:41:53.707091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:41:53.718669 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:41:53.777924 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:41:53.777924 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:41:53.777924 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:41:53.777924 kubelet[2347]: I0813 00:41:53.776902 2347 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:41:54.668467 kubelet[2347]: I0813 00:41:54.668429 2347 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:41:54.670456 kubelet[2347]: I0813 00:41:54.668617 2347 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:41:54.670456 kubelet[2347]: I0813 00:41:54.668897 2347 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:41:54.696902 kubelet[2347]: E0813 00:41:54.696831 2347 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.159.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.159.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:41:54.699741 kubelet[2347]: I0813 00:41:54.699692 2347 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:41:54.711389 kubelet[2347]: E0813 00:41:54.711343 2347 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:41:54.711649 kubelet[2347]: I0813 00:41:54.711622 2347 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:41:54.717110 kubelet[2347]: I0813 00:41:54.717084 2347 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:41:54.717900 kubelet[2347]: I0813 00:41:54.717863 2347 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:41:54.718215 kubelet[2347]: I0813 00:41:54.718178 2347 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:41:54.718471 kubelet[2347]: I0813 00:41:54.718296 2347 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-c-674096e178","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:41:54.718674 kubelet[2347]: I0813 00:41:54.718662 2347 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:41:54.718757 kubelet[2347]: I0813 00:41:54.718748 2347 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:41:54.719023 kubelet[2347]: I0813 00:41:54.719008 2347 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:41:54.722157 kubelet[2347]: I0813 00:41:54.722130 2347 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:41:54.722283 kubelet[2347]: I0813 00:41:54.722270 2347 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:41:54.722359 kubelet[2347]: I0813 00:41:54.722349 2347 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:41:54.722489 kubelet[2347]: I0813 00:41:54.722478 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:41:54.728526 kubelet[2347]: W0813 00:41:54.728468 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.159.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-c-674096e178&limit=500&resourceVersion=0": dial tcp 91.99.159.132:6443: connect: connection refused Aug 13 00:41:54.728613 kubelet[2347]: E0813 00:41:54.728536 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://91.99.159.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-c-674096e178&limit=500&resourceVersion=0\": dial tcp 91.99.159.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:41:54.729132 kubelet[2347]: W0813 00:41:54.729089 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.159.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.159.132:6443: connect: connection refused Aug 13 00:41:54.729202 kubelet[2347]: E0813 00:41:54.729140 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.159.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.159.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:41:54.729370 kubelet[2347]: I0813 00:41:54.729349 2347 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:41:54.730143 kubelet[2347]: I0813 00:41:54.730098 2347 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:41:54.731115 kubelet[2347]: W0813 00:41:54.730269 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:41:54.732648 kubelet[2347]: I0813 00:41:54.732437 2347 server.go:1274] "Started kubelet" Aug 13 00:41:54.734458 kubelet[2347]: I0813 00:41:54.734383 2347 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:41:54.735473 kubelet[2347]: I0813 00:41:54.735447 2347 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:41:54.736492 kubelet[2347]: I0813 00:41:54.735817 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:41:54.736492 kubelet[2347]: I0813 00:41:54.736148 2347 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:41:54.737541 kubelet[2347]: E0813 00:41:54.736295 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.159.132:6443/api/v1/namespaces/default/events\": dial tcp 91.99.159.132:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-c-674096e178.185b2cc767326d30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-c-674096e178,UID:ci-4081-3-5-c-674096e178,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-c-674096e178,},FirstTimestamp:2025-08-13 00:41:54.732412208 +0000 UTC m=+1.007711707,LastTimestamp:2025-08-13 00:41:54.732412208 +0000 UTC m=+1.007711707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-c-674096e178,}" Aug 13 00:41:54.739128 kubelet[2347]: I0813 00:41:54.739105 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:41:54.739379 kubelet[2347]: I0813 00:41:54.739364 2347 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:41:54.743611 kubelet[2347]: E0813 00:41:54.743537 2347 
kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:41:54.743997 kubelet[2347]: E0813 00:41:54.743982 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-c-674096e178\" not found" Aug 13 00:41:54.744096 kubelet[2347]: I0813 00:41:54.744086 2347 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:41:54.744337 kubelet[2347]: I0813 00:41:54.744320 2347 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:41:54.744455 kubelet[2347]: I0813 00:41:54.744444 2347 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:41:54.745327 kubelet[2347]: I0813 00:41:54.745305 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:41:54.745751 kubelet[2347]: W0813 00:41:54.745715 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.159.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.159.132:6443: connect: connection refused Aug 13 00:41:54.745867 kubelet[2347]: E0813 00:41:54.745850 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.159.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.159.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:41:54.747090 kubelet[2347]: I0813 00:41:54.747071 2347 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:41:54.747172 kubelet[2347]: I0813 00:41:54.747163 2347 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:41:54.769372 kubelet[2347]: I0813 00:41:54.769301 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:41:54.769598 kubelet[2347]: E0813 00:41:54.769521 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.159.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-c-674096e178?timeout=10s\": dial tcp 91.99.159.132:6443: connect: connection refused" interval="200ms" Aug 13 00:41:54.771556 kubelet[2347]: I0813 00:41:54.771104 2347 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:41:54.771556 kubelet[2347]: I0813 00:41:54.771141 2347 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:41:54.771556 kubelet[2347]: I0813 00:41:54.771164 2347 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:41:54.771556 kubelet[2347]: E0813 00:41:54.771206 2347 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:41:54.778367 kubelet[2347]: W0813 00:41:54.778325 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.159.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.159.132:6443: connect: connection refused Aug 13 00:41:54.778695 kubelet[2347]: E0813 00:41:54.778374 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.159.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.159.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:41:54.778695 kubelet[2347]: I0813 00:41:54.778452 2347 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:41:54.778695 kubelet[2347]: I0813 00:41:54.778461 2347 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:41:54.778695 kubelet[2347]: I0813 00:41:54.778478 2347 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:41:54.780594 kubelet[2347]: I0813 00:41:54.780557 2347 policy_none.go:49] "None policy: Start" Aug 13 00:41:54.781404 kubelet[2347]: I0813 00:41:54.781375 2347 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:41:54.781470 kubelet[2347]: I0813 00:41:54.781410 2347 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:41:54.786482 kubelet[2347]: I0813 00:41:54.786427 2347 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:41:54.786746 kubelet[2347]: I0813 00:41:54.786664 2347 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:41:54.786746 kubelet[2347]: I0813 00:41:54.786693 2347 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:41:54.788462 kubelet[2347]: I0813 00:41:54.788408 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:41:54.794075 kubelet[2347]: E0813 00:41:54.794035 2347 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-c-674096e178\" not found" Aug 13 00:41:54.889000 kubelet[2347]: I0813 00:41:54.888713 2347 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-c-674096e178" Aug 13 00:41:54.889345 kubelet[2347]: E0813 00:41:54.889307 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.159.132:6443/api/v1/nodes\": dial tcp 91.99.159.132:6443: connect: connection refused" node="ci-4081-3-5-c-674096e178" Aug 13 00:41:54.970409 kubelet[2347]: E0813 00:41:54.970323 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.159.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-c-674096e178?timeout=10s\": dial tcp 91.99.159.132:6443: connect: connection refused" interval="400ms" Aug 13 00:41:55.046705 
kubelet[2347]: I0813 00:41:55.046170 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d208ac6d4c13fac93c2774d6bdc4e30e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-c-674096e178\" (UID: \"d208ac6d4c13fac93c2774d6bdc4e30e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-c-674096e178" Aug 13 00:41:55.046705 kubelet[2347]: I0813 00:41:55.046251 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1accade0534b7cbcfa8369b17ff0de50-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-c-674096e178\" (UID: \"1accade0534b7cbcfa8369b17ff0de50\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178" Aug 13 00:41:55.046705 kubelet[2347]: I0813 00:41:55.046293 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1accade0534b7cbcfa8369b17ff0de50-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-c-674096e178\" (UID: \"1accade0534b7cbcfa8369b17ff0de50\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178" Aug 13 00:41:55.046705 kubelet[2347]: I0813 00:41:55.046337 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/798a516ac39a3bffa8665017350eceb5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-c-674096e178\" (UID: \"798a516ac39a3bffa8665017350eceb5\") " pod="kube-system/kube-scheduler-ci-4081-3-5-c-674096e178" Aug 13 00:41:55.046705 kubelet[2347]: I0813 00:41:55.046374 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d208ac6d4c13fac93c2774d6bdc4e30e-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-c-674096e178\" (UID: \"d208ac6d4c13fac93c2774d6bdc4e30e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-c-674096e178" Aug 13 00:41:55.047172 kubelet[2347]: I0813 00:41:55.046408 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1accade0534b7cbcfa8369b17ff0de50-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-c-674096e178\" (UID: \"1accade0534b7cbcfa8369b17ff0de50\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178" Aug 13 00:41:55.047172 kubelet[2347]: I0813 00:41:55.046442 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1accade0534b7cbcfa8369b17ff0de50-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-c-674096e178\" (UID: \"1accade0534b7cbcfa8369b17ff0de50\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178" Aug 13 00:41:55.047172 kubelet[2347]: I0813 00:41:55.046475 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d208ac6d4c13fac93c2774d6bdc4e30e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-c-674096e178\" (UID: \"d208ac6d4c13fac93c2774d6bdc4e30e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-c-674096e178" Aug 13 00:41:55.047172 kubelet[2347]: I0813 00:41:55.046513 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1accade0534b7cbcfa8369b17ff0de50-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-c-674096e178\" (UID: \"1accade0534b7cbcfa8369b17ff0de50\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178" Aug 13 00:41:55.092988 kubelet[2347]: I0813 00:41:55.092430 2347 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-c-674096e178" Aug 13 00:41:55.092988 kubelet[2347]: E0813 00:41:55.092932 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.159.132:6443/api/v1/nodes\": dial tcp 91.99.159.132:6443: connect: connection refused" node="ci-4081-3-5-c-674096e178" Aug 13 00:41:55.184277 containerd[1603]: time="2025-08-13T00:41:55.184217629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-c-674096e178,Uid:798a516ac39a3bffa8665017350eceb5,Namespace:kube-system,Attempt:0,}" Aug 13 00:41:55.189205 containerd[1603]: time="2025-08-13T00:41:55.187454355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-c-674096e178,Uid:d208ac6d4c13fac93c2774d6bdc4e30e,Namespace:kube-system,Attempt:0,}" Aug 13 00:41:55.189205 containerd[1603]: time="2025-08-13T00:41:55.188541423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-c-674096e178,Uid:1accade0534b7cbcfa8369b17ff0de50,Namespace:kube-system,Attempt:0,}" Aug 13 00:41:55.371921 kubelet[2347]: E0813 00:41:55.371740 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.159.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-c-674096e178?timeout=10s\": dial tcp 91.99.159.132:6443: connect: connection refused" interval="800ms" Aug 13 00:41:55.496050 kubelet[2347]: I0813 00:41:55.495991 2347 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-c-674096e178" Aug 13 00:41:55.496553 kubelet[2347]: E0813 00:41:55.496475 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.159.132:6443/api/v1/nodes\": dial tcp 91.99.159.132:6443: connect: connection refused" node="ci-4081-3-5-c-674096e178" Aug 13 00:41:55.576785 kubelet[2347]: W0813 00:41:55.576696 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.159.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-c-674096e178&limit=500&resourceVersion=0": dial tcp 91.99.159.132:6443: connect: connection refused Aug 13 00:41:55.577101 kubelet[2347]: E0813 00:41:55.577051 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.159.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-c-674096e178&limit=500&resourceVersion=0\": dial tcp 91.99.159.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:41:55.662552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029745484.mount: Deactivated successfully. 
Aug 13 00:41:55.673443 containerd[1603]: time="2025-08-13T00:41:55.673259304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:41:55.674463 containerd[1603]: time="2025-08-13T00:41:55.674352372Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:41:55.676281 containerd[1603]: time="2025-08-13T00:41:55.676219042Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:41:55.677525 containerd[1603]: time="2025-08-13T00:41:55.677466663Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:41:55.678436 containerd[1603]: time="2025-08-13T00:41:55.678392739Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:41:55.679492 containerd[1603]: time="2025-08-13T00:41:55.679328334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:41:55.680958 containerd[1603]: time="2025-08-13T00:41:55.680787904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Aug 13 00:41:55.682235 containerd[1603]: time="2025-08-13T00:41:55.682116841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:41:55.683962 containerd[1603]: time="2025-08-13T00:41:55.683098834Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.153399ms" Aug 13 00:41:55.687235 containerd[1603]: time="2025-08-13T00:41:55.687162960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.532181ms" Aug 13 00:41:55.689295 containerd[1603]: time="2025-08-13T00:41:55.689119066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.5046ms" Aug 13 00:41:55.799553 containerd[1603]: time="2025-08-13T00:41:55.799424476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:41:55.799752 containerd[1603]: time="2025-08-13T00:41:55.799532551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:41:55.799752 containerd[1603]: time="2025-08-13T00:41:55.799551070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:41:55.800314 containerd[1603]: time="2025-08-13T00:41:55.799693383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:41:55.809471 containerd[1603]: time="2025-08-13T00:41:55.809387080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:41:55.809697 containerd[1603]: time="2025-08-13T00:41:55.809671226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:41:55.809792 containerd[1603]: time="2025-08-13T00:41:55.809766782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:41:55.810057 containerd[1603]: time="2025-08-13T00:41:55.810028329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:41:55.812509 containerd[1603]: time="2025-08-13T00:41:55.812398776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:41:55.813564 containerd[1603]: time="2025-08-13T00:41:55.813505203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:41:55.813766 containerd[1603]: time="2025-08-13T00:41:55.813712513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:41:55.814216 containerd[1603]: time="2025-08-13T00:41:55.814120254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:41:55.884653 containerd[1603]: time="2025-08-13T00:41:55.884448334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-c-674096e178,Uid:1accade0534b7cbcfa8369b17ff0de50,Namespace:kube-system,Attempt:0,} returns sandbox id \"31f92eddb1e89d76bc4240c5df7763a4f829e08bde9c3dbcacf0fa5e1bd56a2a\"" Aug 13 00:41:55.887230 containerd[1603]: time="2025-08-13T00:41:55.887194563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-c-674096e178,Uid:798a516ac39a3bffa8665017350eceb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"71e832376a483c75b086b2af1564c5d8a97e0a1a96fc1b1f22b2390ec2b492df\"" Aug 13 00:41:55.891933 containerd[1603]: time="2025-08-13T00:41:55.890400489Z" level=info msg="CreateContainer within sandbox \"31f92eddb1e89d76bc4240c5df7763a4f829e08bde9c3dbcacf0fa5e1bd56a2a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:41:55.895777 containerd[1603]: time="2025-08-13T00:41:55.895737914Z" level=info msg="CreateContainer within sandbox \"71e832376a483c75b086b2af1564c5d8a97e0a1a96fc1b1f22b2390ec2b492df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:41:55.896313 containerd[1603]: time="2025-08-13T00:41:55.896286008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-c-674096e178,Uid:d208ac6d4c13fac93c2774d6bdc4e30e,Namespace:kube-system,Attempt:0,} returns sandbox id \"debdfc7a6047e5b32270bed0c11cfbe3cfa3eef4e0472a7050a02f7d5d215b87\"" Aug 13 00:41:55.902960 containerd[1603]: time="2025-08-13T00:41:55.902929731Z" level=info msg="CreateContainer within sandbox \"debdfc7a6047e5b32270bed0c11cfbe3cfa3eef4e0472a7050a02f7d5d215b87\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:41:55.919330 containerd[1603]: time="2025-08-13T00:41:55.918967444Z" level=info msg="CreateContainer within sandbox \"31f92eddb1e89d76bc4240c5df7763a4f829e08bde9c3dbcacf0fa5e1bd56a2a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40064c0abfd5a35992adfaf14e06737799cc68f1ea4dd011146810e9822791a8\"" Aug 13 00:41:55.922294 containerd[1603]: time="2025-08-13T00:41:55.922224849Z" level=info msg="StartContainer for \"40064c0abfd5a35992adfaf14e06737799cc68f1ea4dd011146810e9822791a8\"" Aug 13 00:41:55.930405 containerd[1603]: time="2025-08-13T00:41:55.930303383Z" level=info msg="CreateContainer within sandbox \"debdfc7a6047e5b32270bed0c11cfbe3cfa3eef4e0472a7050a02f7d5d215b87\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"740c117b1700407e0353d0beabc6aad113d9b361dff49246ff8e58e9b81d8bf3\"" Aug 13 00:41:55.931895 containerd[1603]: time="2025-08-13T00:41:55.931420889Z" level=info msg="StartContainer for \"740c117b1700407e0353d0beabc6aad113d9b361dff49246ff8e58e9b81d8bf3\"" Aug 13 00:41:55.932568 containerd[1603]: time="2025-08-13T00:41:55.932438521Z" level=info msg="CreateContainer within sandbox \"71e832376a483c75b086b2af1564c5d8a97e0a1a96fc1b1f22b2390ec2b492df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70cfcd06a0fc4d302094955e67fbb81bf1593fa1ab0a03c48876074087c16a29\"" Aug 13 00:41:55.933553 containerd[1603]: time="2025-08-13T00:41:55.933522549Z" level=info msg="StartContainer for \"70cfcd06a0fc4d302094955e67fbb81bf1593fa1ab0a03c48876074087c16a29\"" Aug 13 00:41:56.014537 containerd[1603]: time="2025-08-13T00:41:56.014179095Z" level=info 
msg="StartContainer for \"40064c0abfd5a35992adfaf14e06737799cc68f1ea4dd011146810e9822791a8\" returns successfully" Aug 13 00:41:56.033933 containerd[1603]: time="2025-08-13T00:41:56.033061929Z" level=info msg="StartContainer for \"70cfcd06a0fc4d302094955e67fbb81bf1593fa1ab0a03c48876074087c16a29\" returns successfully" Aug 13 00:41:56.040433 containerd[1603]: time="2025-08-13T00:41:56.040033897Z" level=info msg="StartContainer for \"740c117b1700407e0353d0beabc6aad113d9b361dff49246ff8e58e9b81d8bf3\" returns successfully" Aug 13 00:41:56.056408 kubelet[2347]: W0813 00:41:56.056310 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.159.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.159.132:6443: connect: connection refused Aug 13 00:41:56.056408 kubelet[2347]: E0813 00:41:56.056380 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.159.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.159.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:41:56.300451 kubelet[2347]: I0813 00:41:56.300402 2347 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-c-674096e178" Aug 13 00:41:58.832891 kubelet[2347]: I0813 00:41:58.831185 2347 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-c-674096e178" Aug 13 00:41:58.832891 kubelet[2347]: E0813 00:41:58.831224 2347 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-c-674096e178\": node \"ci-4081-3-5-c-674096e178\" not found" Aug 13 00:41:58.938460 kubelet[2347]: E0813 00:41:58.938408 2347 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-5-c-674096e178\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-c-674096e178" Aug 13 00:41:59.733522 kubelet[2347]: I0813 00:41:59.733456 2347 apiserver.go:52] "Watching apiserver" Aug 13 00:41:59.745811 kubelet[2347]: I0813 00:41:59.745376 2347 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:42:00.374458 update_engine[1585]: I20250813 00:42:00.374315 1585 update_attempter.cc:509] Updating boot flags... Aug 13 00:42:00.415971 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2630) Aug 13 00:42:01.168792 systemd[1]: Reloading requested from client PID 2637 ('systemctl') (unit session-7.scope)... Aug 13 00:42:01.168815 systemd[1]: Reloading... Aug 13 00:42:01.271008 zram_generator::config[2680]: No configuration found. Aug 13 00:42:01.385365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:42:01.460583 systemd[1]: Reloading finished in 291 ms. Aug 13 00:42:01.494765 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:42:01.511833 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:42:01.512644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:42:01.519314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 00:42:01.667344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:42:01.667838 (kubelet)[2732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:42:01.753920 kubelet[2732]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:42:01.753920 kubelet[2732]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:42:01.753920 kubelet[2732]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:42:01.753920 kubelet[2732]: I0813 00:42:01.753722 2732 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:42:01.766094 kubelet[2732]: I0813 00:42:01.765933 2732 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:42:01.766428 kubelet[2732]: I0813 00:42:01.766409 2732 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:42:01.767911 kubelet[2732]: I0813 00:42:01.766703 2732 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:42:01.768689 kubelet[2732]: I0813 00:42:01.768671 2732 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:42:01.773047 kubelet[2732]: I0813 00:42:01.773024 2732 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:42:01.779162 kubelet[2732]: E0813 00:42:01.779110 2732 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:42:01.779162 kubelet[2732]: I0813 00:42:01.779149 2732 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:42:01.781943 kubelet[2732]: I0813 00:42:01.781873 2732 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:42:01.782398 kubelet[2732]: I0813 00:42:01.782337 2732 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:42:01.782543 kubelet[2732]: I0813 00:42:01.782468 2732 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:42:01.782695 kubelet[2732]: I0813 00:42:01.782500 2732 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-c-674096e178","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:42:01.782695 kubelet[2732]: I0813 00:42:01.782680 2732 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:42:01.782695 kubelet[2732]: I0813 00:42:01.782689 2732 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:42:01.782988 kubelet[2732]: I0813 00:42:01.782755 2732 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:42:01.783347 kubelet[2732]: I0813 00:42:01.782872 2732 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:42:01.783414 kubelet[2732]: I0813 00:42:01.783348 2732 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:42:01.783414 kubelet[2732]: I0813 00:42:01.783382 2732 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:42:01.783414 kubelet[2732]: I0813 00:42:01.783399 2732 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:42:01.789359 kubelet[2732]: I0813 00:42:01.789320 2732 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:42:01.789905 kubelet[2732]: I0813 00:42:01.789864 2732 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:42:01.790918 kubelet[2732]: I0813 00:42:01.790672 2732 server.go:1274] "Started kubelet" Aug 13 00:42:01.796897 kubelet[2732]: I0813 00:42:01.792463 2732 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Aug 13 00:42:01.796897 kubelet[2732]: I0813 00:42:01.792677 2732 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:42:01.796897 kubelet[2732]: I0813 00:42:01.794776 2732 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:42:01.797215 kubelet[2732]: I0813 00:42:01.797193 2732 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:42:01.799889 kubelet[2732]: I0813 00:42:01.798651 2732 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:42:01.804082 kubelet[2732]: I0813 00:42:01.804058 2732 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:42:01.806634 kubelet[2732]: I0813 00:42:01.806611 2732 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:42:01.808063 kubelet[2732]: E0813 00:42:01.808036 2732 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-c-674096e178\" not found" Aug 13 00:42:01.819890 kubelet[2732]: I0813 00:42:01.819056 2732 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:42:01.819890 kubelet[2732]: I0813 00:42:01.819201 2732 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:42:01.822341 kubelet[2732]: I0813 00:42:01.822177 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:42:01.823098 kubelet[2732]: I0813 00:42:01.823073 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:42:01.823138 kubelet[2732]: I0813 00:42:01.823102 2732 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:42:01.823138 kubelet[2732]: I0813 00:42:01.823120 2732 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:42:01.823190 kubelet[2732]: E0813 00:42:01.823162 2732 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:42:01.835869 kubelet[2732]: I0813 00:42:01.834544 2732 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:42:01.836150 kubelet[2732]: I0813 00:42:01.836116 2732 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:42:01.840905 kubelet[2732]: I0813 00:42:01.840857 2732 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:42:01.846590 kubelet[2732]: E0813 00:42:01.846565 2732 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:42:01.907236 kubelet[2732]: I0813 00:42:01.907182 2732 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 00:42:01.907236 kubelet[2732]: I0813 00:42:01.907204 2732 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 00:42:01.907236 kubelet[2732]: I0813 00:42:01.907237 2732 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:42:01.907469 kubelet[2732]: I0813 00:42:01.907394 2732 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 00:42:01.907469 kubelet[2732]: I0813 00:42:01.907405 2732 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 00:42:01.907469 kubelet[2732]: I0813 00:42:01.907422 2732 policy_none.go:49] "None policy: Start"
Aug 13 00:42:01.908481 kubelet[2732]: I0813 00:42:01.908424 2732 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 00:42:01.908481 kubelet[2732]: I0813 00:42:01.908456 2732 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:42:01.908684 kubelet[2732]: I0813 00:42:01.908614 2732 state_mem.go:75] "Updated machine memory state"
Aug 13 00:42:01.909780 kubelet[2732]: I0813 00:42:01.909735 2732 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 00:42:01.911054 kubelet[2732]: I0813 00:42:01.911029 2732 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:42:01.911577 kubelet[2732]: I0813 00:42:01.911211 2732 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:42:01.911914 kubelet[2732]: I0813 00:42:01.911863 2732 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:42:02.023592 kubelet[2732]: I0813 00:42:02.022430 2732 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.033713 kubelet[2732]: I0813 00:42:02.033663 2732 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.033841 kubelet[2732]: I0813 00:42:02.033820 2732 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.120223 kubelet[2732]: I0813 00:42:02.120137 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1accade0534b7cbcfa8369b17ff0de50-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-c-674096e178\" (UID: \"1accade0534b7cbcfa8369b17ff0de50\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.120372 kubelet[2732]: I0813 00:42:02.120229 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/798a516ac39a3bffa8665017350eceb5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-c-674096e178\" (UID: \"798a516ac39a3bffa8665017350eceb5\") " pod="kube-system/kube-scheduler-ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.120372 kubelet[2732]: I0813 00:42:02.120286 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d208ac6d4c13fac93c2774d6bdc4e30e-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-c-674096e178\" (UID: \"d208ac6d4c13fac93c2774d6bdc4e30e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.120372 kubelet[2732]: I0813 00:42:02.120321 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d208ac6d4c13fac93c2774d6bdc4e30e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-c-674096e178\" (UID: \"d208ac6d4c13fac93c2774d6bdc4e30e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.120372 kubelet[2732]: I0813 00:42:02.120354 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d208ac6d4c13fac93c2774d6bdc4e30e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-c-674096e178\" (UID: \"d208ac6d4c13fac93c2774d6bdc4e30e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.120585 kubelet[2732]: I0813 00:42:02.120394 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1accade0534b7cbcfa8369b17ff0de50-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-c-674096e178\" (UID: \"1accade0534b7cbcfa8369b17ff0de50\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.120585 kubelet[2732]: I0813 00:42:02.120424 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1accade0534b7cbcfa8369b17ff0de50-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-c-674096e178\" (UID: \"1accade0534b7cbcfa8369b17ff0de50\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.120585 kubelet[2732]: I0813 00:42:02.120454 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1accade0534b7cbcfa8369b17ff0de50-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-c-674096e178\" (UID: \"1accade0534b7cbcfa8369b17ff0de50\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.120585 kubelet[2732]: I0813 00:42:02.120489 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1accade0534b7cbcfa8369b17ff0de50-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-c-674096e178\" (UID: \"1accade0534b7cbcfa8369b17ff0de50\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.786865 kubelet[2732]: I0813 00:42:02.786802 2732 apiserver.go:52] "Watching apiserver"
Aug 13 00:42:02.820765 kubelet[2732]: I0813 00:42:02.820673 2732 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 00:42:02.877336 kubelet[2732]: I0813 00:42:02.875522 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-c-674096e178" podStartSLOduration=1.8755044349999999 podStartE2EDuration="1.875504435s" podCreationTimestamp="2025-08-13 00:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:42:02.864038144 +0000 UTC m=+1.186637255" watchObservedRunningTime="2025-08-13 00:42:02.875504435 +0000 UTC m=+1.198103546"
Aug 13 00:42:02.890239 kubelet[2732]: I0813 00:42:02.890147 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178" podStartSLOduration=1.8901200710000001 podStartE2EDuration="1.890120071s" podCreationTimestamp="2025-08-13 00:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:42:02.889124541 +0000 UTC m=+1.211723652" watchObservedRunningTime="2025-08-13 00:42:02.890120071 +0000 UTC m=+1.212719222"
Aug 13 00:42:02.890998 kubelet[2732]: I0813 00:42:02.890280 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-c-674096e178" podStartSLOduration=1.890273066 podStartE2EDuration="1.890273066s" podCreationTimestamp="2025-08-13 00:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:42:02.875849745 +0000 UTC m=+1.198448856" watchObservedRunningTime="2025-08-13 00:42:02.890273066 +0000 UTC m=+1.212872217"
Aug 13 00:42:02.892780 kubelet[2732]: E0813 00:42:02.891974 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-5-c-674096e178\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-c-674096e178"
Aug 13 00:42:02.896229 kubelet[2732]: E0813 00:42:02.895315 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-5-c-674096e178\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-5-c-674096e178"
Aug 13 00:42:07.289118 kubelet[2732]: I0813 00:42:07.289008 2732 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 00:42:07.290225 containerd[1603]: time="2025-08-13T00:42:07.289872190Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 00:42:07.290678 kubelet[2732]: I0813 00:42:07.290139 2732 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 00:42:08.259985 kubelet[2732]: I0813 00:42:08.259194 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aedaacc2-dc75-4bc6-a1cd-7ad677945407-xtables-lock\") pod \"kube-proxy-chhdj\" (UID: \"aedaacc2-dc75-4bc6-a1cd-7ad677945407\") " pod="kube-system/kube-proxy-chhdj"
Aug 13 00:42:08.259985 kubelet[2732]: I0813 00:42:08.259237 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aedaacc2-dc75-4bc6-a1cd-7ad677945407-kube-proxy\") pod \"kube-proxy-chhdj\" (UID: \"aedaacc2-dc75-4bc6-a1cd-7ad677945407\") " pod="kube-system/kube-proxy-chhdj"
Aug 13 00:42:08.259985 kubelet[2732]: I0813 00:42:08.259258 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aedaacc2-dc75-4bc6-a1cd-7ad677945407-lib-modules\") pod \"kube-proxy-chhdj\" (UID: \"aedaacc2-dc75-4bc6-a1cd-7ad677945407\") " pod="kube-system/kube-proxy-chhdj"
Aug 13 00:42:08.259985 kubelet[2732]: I0813 00:42:08.259276 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4mfr\" (UniqueName: \"kubernetes.io/projected/aedaacc2-dc75-4bc6-a1cd-7ad677945407-kube-api-access-v4mfr\") pod \"kube-proxy-chhdj\" (UID: \"aedaacc2-dc75-4bc6-a1cd-7ad677945407\") " pod="kube-system/kube-proxy-chhdj"
Aug 13 00:42:08.461046 kubelet[2732]: I0813 00:42:08.460912 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f3a818f-bdae-452a-967e-bddd37224907-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-w7w9w\" (UID: \"1f3a818f-bdae-452a-967e-bddd37224907\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-w7w9w"
Aug 13 00:42:08.461611 kubelet[2732]: I0813 00:42:08.461045 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l42j5\" (UniqueName: \"kubernetes.io/projected/1f3a818f-bdae-452a-967e-bddd37224907-kube-api-access-l42j5\") pod \"tigera-operator-5bf8dfcb4-w7w9w\" (UID: \"1f3a818f-bdae-452a-967e-bddd37224907\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-w7w9w"
Aug 13 00:42:08.497994 containerd[1603]: time="2025-08-13T00:42:08.497920628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chhdj,Uid:aedaacc2-dc75-4bc6-a1cd-7ad677945407,Namespace:kube-system,Attempt:0,}"
Aug 13 00:42:08.529477 containerd[1603]: time="2025-08-13T00:42:08.529029106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:42:08.529477 containerd[1603]: time="2025-08-13T00:42:08.529174583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:42:08.529477 containerd[1603]: time="2025-08-13T00:42:08.529202982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:42:08.529477 containerd[1603]: time="2025-08-13T00:42:08.529352099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:42:08.576913 containerd[1603]: time="2025-08-13T00:42:08.576444967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chhdj,Uid:aedaacc2-dc75-4bc6-a1cd-7ad677945407,Namespace:kube-system,Attempt:0,} returns sandbox id \"64f31d13e681623b042dcaafb0dbca40d70a57df09fb1558a83e183b8e286dbc\""
Aug 13 00:42:08.587003 containerd[1603]: time="2025-08-13T00:42:08.586925430Z" level=info msg="CreateContainer within sandbox \"64f31d13e681623b042dcaafb0dbca40d70a57df09fb1558a83e183b8e286dbc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 00:42:08.607949 containerd[1603]: time="2025-08-13T00:42:08.607750360Z" level=info msg="CreateContainer within sandbox \"64f31d13e681623b042dcaafb0dbca40d70a57df09fb1558a83e183b8e286dbc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c3529aa6830867cbd33ce87384b5c62bb733fff8b8b83e8e05c854b1435026e4\""
Aug 13 00:42:08.609973 containerd[1603]: time="2025-08-13T00:42:08.609849237Z" level=info msg="StartContainer for \"c3529aa6830867cbd33ce87384b5c62bb733fff8b8b83e8e05c854b1435026e4\""
Aug 13 00:42:08.681031 containerd[1603]: time="2025-08-13T00:42:08.680709654Z" level=info msg="StartContainer for \"c3529aa6830867cbd33ce87384b5c62bb733fff8b8b83e8e05c854b1435026e4\" returns successfully"
Aug 13 00:42:08.723513 containerd[1603]: time="2025-08-13T00:42:08.723463011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-w7w9w,Uid:1f3a818f-bdae-452a-967e-bddd37224907,Namespace:tigera-operator,Attempt:0,}"
Aug 13 00:42:08.766631 containerd[1603]: time="2025-08-13T00:42:08.766248048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:42:08.766631 containerd[1603]: time="2025-08-13T00:42:08.766495203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:42:08.767160 containerd[1603]: time="2025-08-13T00:42:08.766931954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:42:08.769435 containerd[1603]: time="2025-08-13T00:42:08.768446003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:42:08.839952 containerd[1603]: time="2025-08-13T00:42:08.839533015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-w7w9w,Uid:1f3a818f-bdae-452a-967e-bddd37224907,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"609b75c7d4537c3d528c5522f2d97b8160287a302f3a5bf4c684877f6e53d041\""
Aug 13 00:42:08.843843 containerd[1603]: time="2025-08-13T00:42:08.843801927Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 13 00:42:10.508581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2461854196.mount: Deactivated successfully.
Aug 13 00:42:10.945738 containerd[1603]: time="2025-08-13T00:42:10.945638502Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:42:10.947320 containerd[1603]: time="2025-08-13T00:42:10.947261513Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Aug 13 00:42:10.948760 containerd[1603]: time="2025-08-13T00:42:10.948698327Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:42:10.952149 containerd[1603]: time="2025-08-13T00:42:10.952083905Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:42:10.953121 containerd[1603]: time="2025-08-13T00:42:10.953074847Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.109064885s"
Aug 13 00:42:10.953184 containerd[1603]: time="2025-08-13T00:42:10.953117367Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Aug 13 00:42:10.957552 containerd[1603]: time="2025-08-13T00:42:10.957433608Z" level=info msg="CreateContainer within sandbox \"609b75c7d4537c3d528c5522f2d97b8160287a302f3a5bf4c684877f6e53d041\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 00:42:10.976593 containerd[1603]: time="2025-08-13T00:42:10.976544462Z" level=info msg="CreateContainer within sandbox \"609b75c7d4537c3d528c5522f2d97b8160287a302f3a5bf4c684877f6e53d041\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cff86184e0acd2d6f5b8aa479663551c2dbb653399b9189e627e9b5ceb89d250\""
Aug 13 00:42:10.977099 containerd[1603]: time="2025-08-13T00:42:10.977065012Z" level=info msg="StartContainer for \"cff86184e0acd2d6f5b8aa479663551c2dbb653399b9189e627e9b5ceb89d250\""
Aug 13 00:42:11.037791 containerd[1603]: time="2025-08-13T00:42:11.037263041Z" level=info msg="StartContainer for \"cff86184e0acd2d6f5b8aa479663551c2dbb653399b9189e627e9b5ceb89d250\" returns successfully"
Aug 13 00:42:11.923107 kubelet[2732]: I0813 00:42:11.923012 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-w7w9w" podStartSLOduration=1.811623652 podStartE2EDuration="3.922989893s" podCreationTimestamp="2025-08-13 00:42:08 +0000 UTC" firstStartedPulling="2025-08-13 00:42:08.842963904 +0000 UTC m=+7.165563015" lastFinishedPulling="2025-08-13 00:42:10.954330145 +0000 UTC m=+9.276929256" observedRunningTime="2025-08-13 00:42:11.922647299 +0000 UTC m=+10.245246450" watchObservedRunningTime="2025-08-13 00:42:11.922989893 +0000 UTC m=+10.245589004"
Aug 13 00:42:11.923691 kubelet[2732]: I0813 00:42:11.923324 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-chhdj" podStartSLOduration=3.923315288 podStartE2EDuration="3.923315288s" podCreationTimestamp="2025-08-13 00:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:42:08.916424907 +0000 UTC m=+7.239024058" watchObservedRunningTime="2025-08-13 00:42:11.923315288 +0000 UTC m=+10.245914399"
Aug 13 00:42:17.342339 sudo[1838]: pam_unix(sudo:session): session closed for user root
Aug 13 00:42:17.505116 sshd[1834]: pam_unix(sshd:session): session closed for user core
Aug 13 00:42:17.516112 systemd-logind[1581]: Session 7 logged out. Waiting for processes to exit.
Aug 13 00:42:17.516802 systemd[1]: sshd@6-91.99.159.132:22-139.178.89.65:48470.service: Deactivated successfully.
Aug 13 00:42:17.528663 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 00:42:17.530742 systemd-logind[1581]: Removed session 7.
Aug 13 00:42:25.160936 kubelet[2732]: I0813 00:42:25.160834 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f5fae5-467a-456a-b9a8-9d19b1329fdb-tigera-ca-bundle\") pod \"calico-typha-5698c6986f-wllld\" (UID: \"f5f5fae5-467a-456a-b9a8-9d19b1329fdb\") " pod="calico-system/calico-typha-5698c6986f-wllld"
Aug 13 00:42:25.160936 kubelet[2732]: I0813 00:42:25.160901 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd544\" (UniqueName: \"kubernetes.io/projected/f5f5fae5-467a-456a-b9a8-9d19b1329fdb-kube-api-access-xd544\") pod \"calico-typha-5698c6986f-wllld\" (UID: \"f5f5fae5-467a-456a-b9a8-9d19b1329fdb\") " pod="calico-system/calico-typha-5698c6986f-wllld"
Aug 13 00:42:25.160936 kubelet[2732]: I0813 00:42:25.160927 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f5f5fae5-467a-456a-b9a8-9d19b1329fdb-typha-certs\") pod \"calico-typha-5698c6986f-wllld\" (UID: \"f5f5fae5-467a-456a-b9a8-9d19b1329fdb\") " pod="calico-system/calico-typha-5698c6986f-wllld"
Aug 13 00:42:25.261900 kubelet[2732]: I0813 00:42:25.261832 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83b934d2-7abf-4ead-a257-fe2ab05c43ab-lib-modules\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.261900 kubelet[2732]: I0813 00:42:25.261900 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83b934d2-7abf-4ead-a257-fe2ab05c43ab-xtables-lock\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.262054 kubelet[2732]: I0813 00:42:25.261923 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/83b934d2-7abf-4ead-a257-fe2ab05c43ab-cni-bin-dir\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.262054 kubelet[2732]: I0813 00:42:25.261958 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/83b934d2-7abf-4ead-a257-fe2ab05c43ab-cni-net-dir\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.262054 kubelet[2732]: I0813 00:42:25.261975 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/83b934d2-7abf-4ead-a257-fe2ab05c43ab-policysync\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.262054 kubelet[2732]: I0813 00:42:25.261989 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83b934d2-7abf-4ead-a257-fe2ab05c43ab-tigera-ca-bundle\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.262054 kubelet[2732]: I0813 00:42:25.262004 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/83b934d2-7abf-4ead-a257-fe2ab05c43ab-var-run-calico\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.262245 kubelet[2732]: I0813 00:42:25.262031 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/83b934d2-7abf-4ead-a257-fe2ab05c43ab-node-certs\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.262245 kubelet[2732]: I0813 00:42:25.262047 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcpfx\" (UniqueName: \"kubernetes.io/projected/83b934d2-7abf-4ead-a257-fe2ab05c43ab-kube-api-access-kcpfx\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.262245 kubelet[2732]: I0813 00:42:25.262074 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/83b934d2-7abf-4ead-a257-fe2ab05c43ab-cni-log-dir\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.262245 kubelet[2732]: I0813 00:42:25.262090 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/83b934d2-7abf-4ead-a257-fe2ab05c43ab-flexvol-driver-host\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.262245 kubelet[2732]: I0813 00:42:25.262104 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83b934d2-7abf-4ead-a257-fe2ab05c43ab-var-lib-calico\") pod \"calico-node-k4bq9\" (UID: \"83b934d2-7abf-4ead-a257-fe2ab05c43ab\") " pod="calico-system/calico-node-k4bq9"
Aug 13 00:42:25.299210 kubelet[2732]: E0813 00:42:25.299039 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cldrb" podUID="4817502a-aff6-4c70-b804-8c5d92350237"
Aug 13 00:42:25.313013 containerd[1603]: time="2025-08-13T00:42:25.312761231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5698c6986f-wllld,Uid:f5f5fae5-467a-456a-b9a8-9d19b1329fdb,Namespace:calico-system,Attempt:0,}"
Aug 13 00:42:25.368922 kubelet[2732]: E0813 00:42:25.368822 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.368922 kubelet[2732]: W0813 00:42:25.368856 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.370844 kubelet[2732]: E0813 00:42:25.369020 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.371024 containerd[1603]: time="2025-08-13T00:42:25.369858797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:42:25.371024 containerd[1603]: time="2025-08-13T00:42:25.369947997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:42:25.371024 containerd[1603]: time="2025-08-13T00:42:25.369963517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:42:25.371024 containerd[1603]: time="2025-08-13T00:42:25.370053356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:42:25.373434 kubelet[2732]: E0813 00:42:25.372170 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.373434 kubelet[2732]: W0813 00:42:25.372909 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.373434 kubelet[2732]: E0813 00:42:25.372934 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.378774 kubelet[2732]: E0813 00:42:25.378383 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.378774 kubelet[2732]: W0813 00:42:25.378408 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.378774 kubelet[2732]: E0813 00:42:25.378432 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.385913 kubelet[2732]: E0813 00:42:25.384184 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.385913 kubelet[2732]: W0813 00:42:25.384203 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.390349 kubelet[2732]: E0813 00:42:25.388308 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.390349 kubelet[2732]: E0813 00:42:25.388420 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.390349 kubelet[2732]: W0813 00:42:25.388429 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.390349 kubelet[2732]: E0813 00:42:25.389354 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.390349 kubelet[2732]: W0813 00:42:25.389368 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.390349 kubelet[2732]: E0813 00:42:25.390007 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.390349 kubelet[2732]: W0813 00:42:25.390021 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.390847 kubelet[2732]: E0813 00:42:25.390691 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.390847 kubelet[2732]: E0813 00:42:25.390718 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.390847 kubelet[2732]: W0813 00:42:25.390803 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.391072 kubelet[2732]: E0813 00:42:25.390743 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.391072 kubelet[2732]: E0813 00:42:25.390730 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.391072 kubelet[2732]: E0813 00:42:25.391015 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.392185 kubelet[2732]: E0813 00:42:25.391587 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.392185 kubelet[2732]: W0813 00:42:25.391600 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.392185 kubelet[2732]: E0813 00:42:25.391641 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.393499 kubelet[2732]: E0813 00:42:25.393475 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.393584 kubelet[2732]: W0813 00:42:25.393562 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.394064 kubelet[2732]: E0813 00:42:25.393652 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.398099 kubelet[2732]: E0813 00:42:25.397977 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.398099 kubelet[2732]: W0813 00:42:25.398000 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.398099 kubelet[2732]: E0813 00:42:25.398042 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.398521 kubelet[2732]: E0813 00:42:25.398494 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.398869 kubelet[2732]: W0813 00:42:25.398680 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.398869 kubelet[2732]: E0813 00:42:25.398767 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.401438 kubelet[2732]: E0813 00:42:25.400349 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.401438 kubelet[2732]: W0813 00:42:25.400362 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.403804 kubelet[2732]: E0813 00:42:25.402796 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.403804 kubelet[2732]: W0813 00:42:25.402995 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.405041 kubelet[2732]: E0813 00:42:25.404185 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.405285 kubelet[2732]: W0813 00:42:25.405161 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.405285 kubelet[2732]: E0813 00:42:25.405189 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.405285 kubelet[2732]: E0813 00:42:25.405231 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.409998 kubelet[2732]: E0813 00:42:25.409978 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.410166 kubelet[2732]: W0813 00:42:25.410104 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.410599 kubelet[2732]: E0813 00:42:25.410579 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.411517 kubelet[2732]: E0813 00:42:25.411448 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.411916 kubelet[2732]: W0813 00:42:25.411894 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.413544 kubelet[2732]: E0813 00:42:25.413408 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.414684 kubelet[2732]: E0813 00:42:25.414654 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.428264 kubelet[2732]: E0813 00:42:25.428232 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.428264 kubelet[2732]: W0813 00:42:25.428252 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.428367 kubelet[2732]: E0813 00:42:25.428270 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.429290 kubelet[2732]: E0813 00:42:25.429265 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.429290 kubelet[2732]: W0813 00:42:25.429286 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.430001 kubelet[2732]: E0813 00:42:25.429948 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.468267 kubelet[2732]: E0813 00:42:25.468221 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.468267 kubelet[2732]: W0813 00:42:25.468248 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.468267 kubelet[2732]: E0813 00:42:25.468271 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.468747 kubelet[2732]: I0813 00:42:25.468309 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r956z\" (UniqueName: \"kubernetes.io/projected/4817502a-aff6-4c70-b804-8c5d92350237-kube-api-access-r956z\") pod \"csi-node-driver-cldrb\" (UID: \"4817502a-aff6-4c70-b804-8c5d92350237\") " pod="calico-system/csi-node-driver-cldrb"
Aug 13 00:42:25.469643 kubelet[2732]: E0813 00:42:25.469205 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.469643 kubelet[2732]: W0813 00:42:25.469227 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.469643 kubelet[2732]: E0813 00:42:25.469249 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.469643 kubelet[2732]: I0813 00:42:25.469274 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4817502a-aff6-4c70-b804-8c5d92350237-kubelet-dir\") pod \"csi-node-driver-cldrb\" (UID: \"4817502a-aff6-4c70-b804-8c5d92350237\") " pod="calico-system/csi-node-driver-cldrb"
Aug 13 00:42:25.470179 kubelet[2732]: E0813 00:42:25.470003 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.470179 kubelet[2732]: W0813 00:42:25.470017 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.472199 kubelet[2732]: E0813 00:42:25.470590 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.472199 kubelet[2732]: W0813 00:42:25.470610 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.472199 kubelet[2732]: E0813 00:42:25.470625 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.472199 kubelet[2732]: E0813 00:42:25.471635 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.472199 kubelet[2732]: W0813 00:42:25.471647 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.472199 kubelet[2732]: E0813 00:42:25.471660 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.472199 kubelet[2732]: E0813 00:42:25.472014 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.472199 kubelet[2732]: I0813 00:42:25.472090 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4817502a-aff6-4c70-b804-8c5d92350237-socket-dir\") pod \"csi-node-driver-cldrb\" (UID: \"4817502a-aff6-4c70-b804-8c5d92350237\") " pod="calico-system/csi-node-driver-cldrb"
Aug 13 00:42:25.472199 kubelet[2732]: E0813 00:42:25.472146 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.474808 kubelet[2732]: W0813 00:42:25.472157 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.474808 kubelet[2732]: E0813 00:42:25.472183 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.474808 kubelet[2732]: E0813 00:42:25.473022 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.474808 kubelet[2732]: W0813 00:42:25.473034 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.474808 kubelet[2732]: E0813 00:42:25.473059 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.474808 kubelet[2732]: E0813 00:42:25.473364 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.474808 kubelet[2732]: W0813 00:42:25.473374 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.474808 kubelet[2732]: E0813 00:42:25.473508 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.474808 kubelet[2732]: I0813 00:42:25.473536 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4817502a-aff6-4c70-b804-8c5d92350237-registration-dir\") pod \"csi-node-driver-cldrb\" (UID: \"4817502a-aff6-4c70-b804-8c5d92350237\") " pod="calico-system/csi-node-driver-cldrb"
Aug 13 00:42:25.476746 kubelet[2732]: E0813 00:42:25.474168 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.476746 kubelet[2732]: W0813 00:42:25.474181 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.476746 kubelet[2732]: E0813 00:42:25.474303 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.476746 kubelet[2732]: E0813 00:42:25.474825 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.476746 kubelet[2732]: W0813 00:42:25.474836 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.476746 kubelet[2732]: E0813 00:42:25.475215 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.476746 kubelet[2732]: E0813 00:42:25.475524 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.476746 kubelet[2732]: W0813 00:42:25.475635 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.476746 kubelet[2732]: E0813 00:42:25.475794 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.480374 kubelet[2732]: I0813 00:42:25.475816 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4817502a-aff6-4c70-b804-8c5d92350237-varrun\") pod \"csi-node-driver-cldrb\" (UID: \"4817502a-aff6-4c70-b804-8c5d92350237\") " pod="calico-system/csi-node-driver-cldrb"
Aug 13 00:42:25.480374 kubelet[2732]: E0813 00:42:25.476472 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.480374 kubelet[2732]: W0813 00:42:25.476581 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.480374 kubelet[2732]: E0813 00:42:25.477442 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.480374 kubelet[2732]: W0813 00:42:25.477557 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.480374 kubelet[2732]: E0813 00:42:25.477569 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.480374 kubelet[2732]: E0813 00:42:25.477585 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.480374 kubelet[2732]: E0813 00:42:25.478637 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.480374 kubelet[2732]: W0813 00:42:25.478650 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.482124 kubelet[2732]: E0813 00:42:25.479128 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.482124 kubelet[2732]: E0813 00:42:25.481246 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.482124 kubelet[2732]: W0813 00:42:25.481259 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.482124 kubelet[2732]: E0813 00:42:25.481694 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.491913 containerd[1603]: time="2025-08-13T00:42:25.490582485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5698c6986f-wllld,Uid:f5f5fae5-467a-456a-b9a8-9d19b1329fdb,Namespace:calico-system,Attempt:0,} returns sandbox id \"81a56677d4f56bf703eb634eceefdd85ee88e2f07c36fa5f5c0287e347444dab\""
Aug 13 00:42:25.495063 containerd[1603]: time="2025-08-13T00:42:25.494828776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 13 00:42:25.577535 kubelet[2732]: E0813 00:42:25.577505 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.578791 kubelet[2732]: W0813 00:42:25.577696 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.578791 kubelet[2732]: E0813 00:42:25.578614 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.579367 kubelet[2732]: E0813 00:42:25.579246 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.579367 kubelet[2732]: W0813 00:42:25.579268 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.579367 kubelet[2732]: E0813 00:42:25.579295 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.580334 kubelet[2732]: E0813 00:42:25.580185 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.580334 kubelet[2732]: W0813 00:42:25.580202 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.580334 kubelet[2732]: E0813 00:42:25.580228 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.580783 kubelet[2732]: E0813 00:42:25.580623 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.580783 kubelet[2732]: W0813 00:42:25.580636 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.580783 kubelet[2732]: E0813 00:42:25.580660 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.581276 kubelet[2732]: E0813 00:42:25.581178 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.581276 kubelet[2732]: W0813 00:42:25.581191 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.581276 kubelet[2732]: E0813 00:42:25.581214 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.582587 kubelet[2732]: E0813 00:42:25.582464 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.582587 kubelet[2732]: W0813 00:42:25.582478 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.582587 kubelet[2732]: E0813 00:42:25.582584 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.584613 kubelet[2732]: E0813 00:42:25.584012 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.584613 kubelet[2732]: W0813 00:42:25.584031 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.584613 kubelet[2732]: E0813 00:42:25.584099 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.584613 kubelet[2732]: E0813 00:42:25.584239 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.584613 kubelet[2732]: W0813 00:42:25.584246 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.584613 kubelet[2732]: E0813 00:42:25.584258 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.584613 kubelet[2732]: E0813 00:42:25.584426 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.584613 kubelet[2732]: W0813 00:42:25.584434 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.584613 kubelet[2732]: E0813 00:42:25.584448 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.584613 kubelet[2732]: E0813 00:42:25.584602 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.587259 kubelet[2732]: W0813 00:42:25.584618 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.587259 kubelet[2732]: E0813 00:42:25.584634 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.587259 kubelet[2732]: E0813 00:42:25.584863 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.587259 kubelet[2732]: W0813 00:42:25.584872 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.587259 kubelet[2732]: E0813 00:42:25.584898 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.587259 kubelet[2732]: E0813 00:42:25.585065 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.587259 kubelet[2732]: W0813 00:42:25.585073 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.587259 kubelet[2732]: E0813 00:42:25.585088 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.587259 kubelet[2732]: E0813 00:42:25.585279 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.587259 kubelet[2732]: W0813 00:42:25.585288 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.587539 kubelet[2732]: E0813 00:42:25.585343 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.587539 kubelet[2732]: E0813 00:42:25.585478 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.587539 kubelet[2732]: W0813 00:42:25.585494 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.587539 kubelet[2732]: E0813 00:42:25.585600 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.587539 kubelet[2732]: E0813 00:42:25.585655 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.587539 kubelet[2732]: W0813 00:42:25.585661 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.587539 kubelet[2732]: E0813 00:42:25.585674 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.587539 kubelet[2732]: E0813 00:42:25.585850 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.587539 kubelet[2732]: W0813 00:42:25.585859 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.587539 kubelet[2732]: E0813 00:42:25.585873 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.589306 kubelet[2732]: E0813 00:42:25.586053 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.589306 kubelet[2732]: W0813 00:42:25.586060 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.589306 kubelet[2732]: E0813 00:42:25.586074 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.589306 kubelet[2732]: E0813 00:42:25.586221 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.589306 kubelet[2732]: W0813 00:42:25.586233 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.589306 kubelet[2732]: E0813 00:42:25.586257 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.589306 kubelet[2732]: E0813 00:42:25.587028 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.589306 kubelet[2732]: W0813 00:42:25.587039 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.589306 kubelet[2732]: E0813 00:42:25.587058 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.589306 kubelet[2732]: E0813 00:42:25.587307 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.591124 kubelet[2732]: W0813 00:42:25.587315 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.591124 kubelet[2732]: E0813 00:42:25.587393 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.591124 kubelet[2732]: E0813 00:42:25.587494 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.591124 kubelet[2732]: W0813 00:42:25.587502 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.591124 kubelet[2732]: E0813 00:42:25.587579 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.591124 kubelet[2732]: E0813 00:42:25.587665 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.591124 kubelet[2732]: W0813 00:42:25.587672 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.591124 kubelet[2732]: E0813 00:42:25.587683 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:42:25.591124 kubelet[2732]: E0813 00:42:25.588026 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:42:25.591124 kubelet[2732]: W0813 00:42:25.588037 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:42:25.591320 kubelet[2732]: E0813 00:42:25.588055 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 00:42:25.591320 kubelet[2732]: E0813 00:42:25.589767 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:25.591320 kubelet[2732]: W0813 00:42:25.589779 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:25.591320 kubelet[2732]: E0813 00:42:25.589794 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:25.591320 kubelet[2732]: E0813 00:42:25.590111 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:25.591320 kubelet[2732]: W0813 00:42:25.590121 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:25.591320 kubelet[2732]: E0813 00:42:25.590131 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:25.606264 kubelet[2732]: E0813 00:42:25.605154 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:25.606264 kubelet[2732]: W0813 00:42:25.606259 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:25.606374 kubelet[2732]: E0813 00:42:25.606282 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:25.729008 containerd[1603]: time="2025-08-13T00:42:25.728925603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k4bq9,Uid:83b934d2-7abf-4ead-a257-fe2ab05c43ab,Namespace:calico-system,Attempt:0,}" Aug 13 00:42:25.766766 containerd[1603]: time="2025-08-13T00:42:25.766518223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:42:25.767727 containerd[1603]: time="2025-08-13T00:42:25.767454977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:42:25.768034 containerd[1603]: time="2025-08-13T00:42:25.767492977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:25.768340 containerd[1603]: time="2025-08-13T00:42:25.768195652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:25.846944 containerd[1603]: time="2025-08-13T00:42:25.846897190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k4bq9,Uid:83b934d2-7abf-4ead-a257-fe2ab05c43ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7c0cd0278bbd3284585ee5337e2d81a3c67c51d00b2566dc518cda481e1ab78\"" Aug 13 00:42:26.276296 systemd[1]: run-containerd-runc-k8s.io-81a56677d4f56bf703eb634eceefdd85ee88e2f07c36fa5f5c0287e347444dab-runc.yRd9Dw.mount: Deactivated successfully. Aug 13 00:42:26.824814 kubelet[2732]: E0813 00:42:26.824577 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cldrb" podUID="4817502a-aff6-4c70-b804-8c5d92350237" Aug 13 00:42:26.913337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113515570.mount: Deactivated successfully. Aug 13 00:42:28.104851 containerd[1603]: time="2025-08-13T00:42:28.104726059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:28.112527 containerd[1603]: time="2025-08-13T00:42:28.112476495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 13 00:42:28.113916 containerd[1603]: time="2025-08-13T00:42:28.112801813Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:28.116380 containerd[1603]: time="2025-08-13T00:42:28.116322233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:28.117633 containerd[1603]: time="2025-08-13T00:42:28.117532627Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.622663251s" Aug 13 00:42:28.117633 containerd[1603]: time="2025-08-13T00:42:28.117607826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 13 00:42:28.120202 containerd[1603]: time="2025-08-13T00:42:28.120029252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:42:28.136359 containerd[1603]: time="2025-08-13T00:42:28.135704963Z" level=info msg="CreateContainer within sandbox \"81a56677d4f56bf703eb634eceefdd85ee88e2f07c36fa5f5c0287e347444dab\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:42:28.153326 containerd[1603]: time="2025-08-13T00:42:28.153279704Z" level=info msg="CreateContainer within sandbox \"81a56677d4f56bf703eb634eceefdd85ee88e2f07c36fa5f5c0287e347444dab\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8513ebe5e8310347c682835b804df01dbb928632a50d99589a93f272c4ed05b3\"" Aug 13 00:42:28.154106 containerd[1603]: time="2025-08-13T00:42:28.154057499Z" level=info 
msg="StartContainer for \"8513ebe5e8310347c682835b804df01dbb928632a50d99589a93f272c4ed05b3\"" Aug 13 00:42:28.225519 containerd[1603]: time="2025-08-13T00:42:28.225477014Z" level=info msg="StartContainer for \"8513ebe5e8310347c682835b804df01dbb928632a50d99589a93f272c4ed05b3\" returns successfully" Aug 13 00:42:28.824467 kubelet[2732]: E0813 00:42:28.824391 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cldrb" podUID="4817502a-aff6-4c70-b804-8c5d92350237" Aug 13 00:42:28.994730 kubelet[2732]: E0813 00:42:28.994646 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.994730 kubelet[2732]: W0813 00:42:28.994706 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.994730 kubelet[2732]: E0813 00:42:28.994742 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.995515 kubelet[2732]: E0813 00:42:28.995340 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.995515 kubelet[2732]: W0813 00:42:28.995364 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.995515 kubelet[2732]: E0813 00:42:28.995389 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.996009 kubelet[2732]: E0813 00:42:28.995990 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.996079 kubelet[2732]: W0813 00:42:28.996009 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.996079 kubelet[2732]: E0813 00:42:28.996025 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.996223 kubelet[2732]: E0813 00:42:28.996206 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.996223 kubelet[2732]: W0813 00:42:28.996217 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.996400 kubelet[2732]: E0813 00:42:28.996227 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:42:28.996504 kubelet[2732]: E0813 00:42:28.996491 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.996504 kubelet[2732]: W0813 00:42:28.996505 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.996611 kubelet[2732]: E0813 00:42:28.996515 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.996707 kubelet[2732]: E0813 00:42:28.996693 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.996707 kubelet[2732]: W0813 00:42:28.996705 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.996914 kubelet[2732]: E0813 00:42:28.996715 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.997029 kubelet[2732]: E0813 00:42:28.997014 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.997029 kubelet[2732]: W0813 00:42:28.997027 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.997128 kubelet[2732]: E0813 00:42:28.997038 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.997211 kubelet[2732]: E0813 00:42:28.997197 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.997211 kubelet[2732]: W0813 00:42:28.997207 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.997393 kubelet[2732]: E0813 00:42:28.997216 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.997496 kubelet[2732]: E0813 00:42:28.997481 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.997496 kubelet[2732]: W0813 00:42:28.997494 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.997599 kubelet[2732]: E0813 00:42:28.997504 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:42:28.997690 kubelet[2732]: E0813 00:42:28.997660 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.997690 kubelet[2732]: W0813 00:42:28.997687 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.997895 kubelet[2732]: E0813 00:42:28.997697 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.998210 kubelet[2732]: E0813 00:42:28.997984 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.998210 kubelet[2732]: W0813 00:42:28.997996 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.998210 kubelet[2732]: E0813 00:42:28.998007 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.998210 kubelet[2732]: E0813 00:42:28.998151 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.998210 kubelet[2732]: W0813 00:42:28.998159 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.998210 kubelet[2732]: E0813 00:42:28.998167 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.998504 kubelet[2732]: E0813 00:42:28.998482 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.998504 kubelet[2732]: W0813 00:42:28.998497 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.998595 kubelet[2732]: E0813 00:42:28.998507 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:28.998707 kubelet[2732]: E0813 00:42:28.998684 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.998707 kubelet[2732]: W0813 00:42:28.998698 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.998707 kubelet[2732]: E0813 00:42:28.998707 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:42:28.998988 kubelet[2732]: E0813 00:42:28.998975 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:28.998988 kubelet[2732]: W0813 00:42:28.998989 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:28.999061 kubelet[2732]: E0813 00:42:28.999000 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.009819 kubelet[2732]: E0813 00:42:29.009615 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.009819 kubelet[2732]: W0813 00:42:29.009650 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.009819 kubelet[2732]: E0813 00:42:29.009706 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.010364 kubelet[2732]: E0813 00:42:29.010287 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.010364 kubelet[2732]: W0813 00:42:29.010308 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.010364 kubelet[2732]: E0813 00:42:29.010348 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.010968 kubelet[2732]: E0813 00:42:29.010857 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.010968 kubelet[2732]: W0813 00:42:29.010920 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.010968 kubelet[2732]: E0813 00:42:29.010943 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.011222 kubelet[2732]: E0813 00:42:29.011179 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.011222 kubelet[2732]: W0813 00:42:29.011195 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.011222 kubelet[2732]: E0813 00:42:29.011219 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:42:29.011449 kubelet[2732]: E0813 00:42:29.011431 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.011449 kubelet[2732]: W0813 00:42:29.011447 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.011548 kubelet[2732]: E0813 00:42:29.011470 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.011746 kubelet[2732]: E0813 00:42:29.011723 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.011746 kubelet[2732]: W0813 00:42:29.011737 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.011923 kubelet[2732]: E0813 00:42:29.011758 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.012284 kubelet[2732]: E0813 00:42:29.012239 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.012284 kubelet[2732]: W0813 00:42:29.012258 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.012284 kubelet[2732]: E0813 00:42:29.012274 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.012607 kubelet[2732]: E0813 00:42:29.012547 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.012607 kubelet[2732]: W0813 00:42:29.012563 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.012748 kubelet[2732]: E0813 00:42:29.012647 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.012748 kubelet[2732]: E0813 00:42:29.012840 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.012748 kubelet[2732]: W0813 00:42:29.012851 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.013010 kubelet[2732]: E0813 00:42:29.012968 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:42:29.013134 kubelet[2732]: E0813 00:42:29.013106 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.013134 kubelet[2732]: W0813 00:42:29.013122 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.013134 kubelet[2732]: E0813 00:42:29.013136 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.013321 kubelet[2732]: E0813 00:42:29.013304 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.013321 kubelet[2732]: W0813 00:42:29.013315 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.013384 kubelet[2732]: E0813 00:42:29.013334 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.013502 kubelet[2732]: E0813 00:42:29.013484 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.013502 kubelet[2732]: W0813 00:42:29.013495 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.013560 kubelet[2732]: E0813 00:42:29.013511 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.013779 kubelet[2732]: E0813 00:42:29.013758 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.013779 kubelet[2732]: W0813 00:42:29.013772 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.013850 kubelet[2732]: E0813 00:42:29.013791 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.014218 kubelet[2732]: E0813 00:42:29.014189 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.014218 kubelet[2732]: W0813 00:42:29.014207 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.014530 kubelet[2732]: E0813 00:42:29.014368 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:42:29.014530 kubelet[2732]: E0813 00:42:29.014377 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.014530 kubelet[2732]: W0813 00:42:29.014422 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.014530 kubelet[2732]: E0813 00:42:29.014435 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.014840 kubelet[2732]: E0813 00:42:29.014717 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.014840 kubelet[2732]: W0813 00:42:29.014729 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.014840 kubelet[2732]: E0813 00:42:29.014740 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.015025 kubelet[2732]: E0813 00:42:29.015013 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.015091 kubelet[2732]: W0813 00:42:29.015080 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.015147 kubelet[2732]: E0813 00:42:29.015137 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:42:29.015635 kubelet[2732]: E0813 00:42:29.015619 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:42:29.015789 kubelet[2732]: W0813 00:42:29.015742 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:42:29.015789 kubelet[2732]: E0813 00:42:29.015761 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:42:29.451187 containerd[1603]: time="2025-08-13T00:42:29.451133533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:29.454896 containerd[1603]: time="2025-08-13T00:42:29.452960363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 13 00:42:29.454896 containerd[1603]: time="2025-08-13T00:42:29.454024998Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:29.460089 containerd[1603]: time="2025-08-13T00:42:29.460032606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:29.462135 containerd[1603]: time="2025-08-13T00:42:29.460896041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.340815349s" Aug 13 00:42:29.462135 containerd[1603]: time="2025-08-13T00:42:29.461016480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:42:29.465301 containerd[1603]: time="2025-08-13T00:42:29.465236538Z" level=info msg="CreateContainer within sandbox \"b7c0cd0278bbd3284585ee5337e2d81a3c67c51d00b2566dc518cda481e1ab78\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:42:29.488097 containerd[1603]: time="2025-08-13T00:42:29.488018697Z" level=info msg="CreateContainer within sandbox \"b7c0cd0278bbd3284585ee5337e2d81a3c67c51d00b2566dc518cda481e1ab78\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c7c5f0bc7d7b4a56f1b6b8cced7b859d57ed16f5483bd2cf5e4c1326e2416dad\"" Aug 13 00:42:29.490272 containerd[1603]: time="2025-08-13T00:42:29.490200085Z" level=info msg="StartContainer for \"c7c5f0bc7d7b4a56f1b6b8cced7b859d57ed16f5483bd2cf5e4c1326e2416dad\"" Aug 13 00:42:29.565336 containerd[1603]: time="2025-08-13T00:42:29.565162726Z" level=info msg="StartContainer for \"c7c5f0bc7d7b4a56f1b6b8cced7b859d57ed16f5483bd2cf5e4c1326e2416dad\" returns successfully" Aug 13 00:42:29.622656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7c5f0bc7d7b4a56f1b6b8cced7b859d57ed16f5483bd2cf5e4c1326e2416dad-rootfs.mount: Deactivated successfully. 
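The recurring driver-call.go/plugins.go failures above are the kubelet's plugin prober executing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init": the binary is not installed yet, so the call produces no output and the JSON decode fails with "unexpected end of JSON input". The flexvol-driver container started just above is what normally installs that uds binary. As a minimal sketch of the exec contract the kubelet expects, assuming only the documented FlexVolume call/response protocol (this is not the real nodeagent~uds source):

    // flexvol_stub.go -- hypothetical stand-in for a FlexVolume driver binary.
    // The kubelet invokes the executable with a command name and reads one JSON
    // object from stdout; an empty stdout is exactly what produces the
    // "unexpected end of JSON input" errors logged above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON shape the kubelet unmarshals after a call.
    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // reported on "init"
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        var out driverStatus
        switch os.Args[1] {
        case "init":
            out = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
        default:
            out = driverStatus{Status: "Not supported"}
        }
        b, _ := json.Marshal(out)
        fmt.Println(string(b))
    }

Any executable at that path which prints such a JSON object on "init" satisfies the probe, so these errors would be expected to stop recurring once flexvol-driver has installed the real binary.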
Aug 13 00:42:29.710352 containerd[1603]: time="2025-08-13T00:42:29.710041955Z" level=info msg="shim disconnected" id=c7c5f0bc7d7b4a56f1b6b8cced7b859d57ed16f5483bd2cf5e4c1326e2416dad namespace=k8s.io Aug 13 00:42:29.710352 containerd[1603]: time="2025-08-13T00:42:29.710243754Z" level=warning msg="cleaning up after shim disconnected" id=c7c5f0bc7d7b4a56f1b6b8cced7b859d57ed16f5483bd2cf5e4c1326e2416dad namespace=k8s.io Aug 13 00:42:29.710352 containerd[1603]: time="2025-08-13T00:42:29.710261713Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:42:29.962033 kubelet[2732]: I0813 00:42:29.961142 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:42:29.965439 containerd[1603]: time="2025-08-13T00:42:29.964779758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:42:29.986427 kubelet[2732]: I0813 00:42:29.983567 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5698c6986f-wllld" podStartSLOduration=3.358828979 podStartE2EDuration="5.983546618s" podCreationTimestamp="2025-08-13 00:42:24 +0000 UTC" firstStartedPulling="2025-08-13 00:42:25.494570498 +0000 UTC m=+23.817169569" lastFinishedPulling="2025-08-13 00:42:28.119288097 +0000 UTC m=+26.441887208" observedRunningTime="2025-08-13 00:42:28.972013774 +0000 UTC m=+27.294612885" watchObservedRunningTime="2025-08-13 00:42:29.983546618 +0000 UTC m=+28.306145729" Aug 13 00:42:30.824716 kubelet[2732]: E0813 00:42:30.823921 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cldrb" podUID="4817502a-aff6-4c70-b804-8c5d92350237" Aug 13 00:42:32.307942 containerd[1603]: time="2025-08-13T00:42:32.307819109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:32.309209 containerd[1603]: time="2025-08-13T00:42:32.309136184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:42:32.310361 containerd[1603]: time="2025-08-13T00:42:32.310291178Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:32.315023 containerd[1603]: time="2025-08-13T00:42:32.313818123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:32.315023 containerd[1603]: time="2025-08-13T00:42:32.314861478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.35001656s" Aug 13 00:42:32.315023 containerd[1603]: time="2025-08-13T00:42:32.314918118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:42:32.318446 containerd[1603]: time="2025-08-13T00:42:32.318329623Z" level=info msg="CreateContainer within sandbox \"b7c0cd0278bbd3284585ee5337e2d81a3c67c51d00b2566dc518cda481e1ab78\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
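The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (00:42:29.983546618 - 00:42:24 = 5.983546618s), and podStartSLOduration subtracts the image-pull window measured on the monotonic clock (m=+26.441887208 - m=+23.817169569 = 2.624717639s), which gives exactly the logged 3.358828979. A small sketch reproducing the arithmetic from the logged values:

    // startup_slo.go -- reconstruction of the startup-latency arithmetic in the
    // entry above: E2E uses wall-clock times, the pull window uses the m=+
    // monotonic offsets, and SLO duration = E2E minus the pull window.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-08-13T00:42:24Z")                   // podCreationTimestamp
        running, _ := time.Parse(time.RFC3339Nano, "2025-08-13T00:42:29.983546618Z")     // watchObservedRunningTime

        e2e := running.Sub(created) // 5.983546618s = podStartE2EDuration

        firstPull := 23817169569 * time.Nanosecond // m=+23.817169569 (firstStartedPulling)
        lastPull := 26441887208 * time.Nanosecond  // m=+26.441887208 (lastFinishedPulling)

        slo := e2e - (lastPull - firstPull)
        fmt.Println(e2e, slo) // prints: 5.983546618s 3.358828979s
    }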
msg="CreateContainer within sandbox \"b7c0cd0278bbd3284585ee5337e2d81a3c67c51d00b2566dc518cda481e1ab78\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:42:32.339337 containerd[1603]: time="2025-08-13T00:42:32.339065012Z" level=info msg="CreateContainer within sandbox \"b7c0cd0278bbd3284585ee5337e2d81a3c67c51d00b2566dc518cda481e1ab78\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"51bf58fbd21b0fd24a639239d8993576e7b3cfca343b4f297962ac832d721182\"" Aug 13 00:42:32.341263 containerd[1603]: time="2025-08-13T00:42:32.340963404Z" level=info msg="StartContainer for \"51bf58fbd21b0fd24a639239d8993576e7b3cfca343b4f297962ac832d721182\"" Aug 13 00:42:32.408842 containerd[1603]: time="2025-08-13T00:42:32.408603467Z" level=info msg="StartContainer for \"51bf58fbd21b0fd24a639239d8993576e7b3cfca343b4f297962ac832d721182\" returns successfully" Aug 13 00:42:32.823441 kubelet[2732]: E0813 00:42:32.823382 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cldrb" podUID="4817502a-aff6-4c70-b804-8c5d92350237" Aug 13 00:42:32.924022 containerd[1603]: time="2025-08-13T00:42:32.923951966Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:42:32.967940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51bf58fbd21b0fd24a639239d8993576e7b3cfca343b4f297962ac832d721182-rootfs.mount: Deactivated successfully. 
Aug 13 00:42:32.977059 kubelet[2732]: I0813 00:42:32.976544 2732 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:42:33.045442 containerd[1603]: time="2025-08-13T00:42:33.044920448Z" level=info msg="shim disconnected" id=51bf58fbd21b0fd24a639239d8993576e7b3cfca343b4f297962ac832d721182 namespace=k8s.io Aug 13 00:42:33.045442 containerd[1603]: time="2025-08-13T00:42:33.045246927Z" level=warning msg="cleaning up after shim disconnected" id=51bf58fbd21b0fd24a639239d8993576e7b3cfca343b4f297962ac832d721182 namespace=k8s.io Aug 13 00:42:33.045442 containerd[1603]: time="2025-08-13T00:42:33.045267486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:42:33.049977 kubelet[2732]: I0813 00:42:33.047122 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcpnq\" (UniqueName: \"kubernetes.io/projected/0df81e4e-8fb8-429d-9966-a87b9cc013c8-kube-api-access-xcpnq\") pod \"calico-apiserver-6fcb999d87-pw4vs\" (UID: \"0df81e4e-8fb8-429d-9966-a87b9cc013c8\") " pod="calico-apiserver/calico-apiserver-6fcb999d87-pw4vs" Aug 13 00:42:33.049977 kubelet[2732]: I0813 00:42:33.047185 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0df81e4e-8fb8-429d-9966-a87b9cc013c8-calico-apiserver-certs\") pod \"calico-apiserver-6fcb999d87-pw4vs\" (UID: \"0df81e4e-8fb8-429d-9966-a87b9cc013c8\") " pod="calico-apiserver/calico-apiserver-6fcb999d87-pw4vs" Aug 13 00:42:33.148644 kubelet[2732]: I0813 00:42:33.148498 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4jrj\" (UniqueName: \"kubernetes.io/projected/090fd80a-e98c-47af-a53e-06165e3cc066-kube-api-access-w4jrj\") pod \"goldmane-58fd7646b9-844tr\" (UID: \"090fd80a-e98c-47af-a53e-06165e3cc066\") " pod="calico-system/goldmane-58fd7646b9-844tr" Aug 13 00:42:33.148644 kubelet[2732]: I0813 00:42:33.148563 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5ccf\" (UniqueName: \"kubernetes.io/projected/9c15bfda-2353-4522-94fd-e2dfc420915b-kube-api-access-g5ccf\") pod \"calico-kube-controllers-6bb9ddbc7d-mqfxp\" (UID: \"9c15bfda-2353-4522-94fd-e2dfc420915b\") " pod="calico-system/calico-kube-controllers-6bb9ddbc7d-mqfxp" Aug 13 00:42:33.148644 kubelet[2732]: I0813 00:42:33.148596 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09b573c1-fa3c-4342-84c0-9c27bccb5bed-config-volume\") pod \"coredns-7c65d6cfc9-hcds2\" (UID: \"09b573c1-fa3c-4342-84c0-9c27bccb5bed\") " pod="kube-system/coredns-7c65d6cfc9-hcds2" Aug 13 00:42:33.148807 kubelet[2732]: I0813 00:42:33.148648 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hnhf\" (UniqueName: \"kubernetes.io/projected/09b573c1-fa3c-4342-84c0-9c27bccb5bed-kube-api-access-9hnhf\") pod \"coredns-7c65d6cfc9-hcds2\" (UID: \"09b573c1-fa3c-4342-84c0-9c27bccb5bed\") " pod="kube-system/coredns-7c65d6cfc9-hcds2" Aug 13 00:42:33.148807 kubelet[2732]: I0813 00:42:33.148684 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40e29be8-46ac-4faf-8185-7148a795d441-config-volume\") pod \"coredns-7c65d6cfc9-dm6pb\" (UID: 
\"40e29be8-46ac-4faf-8185-7148a795d441\") " pod="kube-system/coredns-7c65d6cfc9-dm6pb" Aug 13 00:42:33.148807 kubelet[2732]: I0813 00:42:33.148711 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0579e22-efaf-495e-9514-3069b4f2d6be-whisker-backend-key-pair\") pod \"whisker-697d76b-vp9n6\" (UID: \"d0579e22-efaf-495e-9514-3069b4f2d6be\") " pod="calico-system/whisker-697d76b-vp9n6" Aug 13 00:42:33.148807 kubelet[2732]: I0813 00:42:33.148735 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7p7v\" (UniqueName: \"kubernetes.io/projected/40e29be8-46ac-4faf-8185-7148a795d441-kube-api-access-q7p7v\") pod \"coredns-7c65d6cfc9-dm6pb\" (UID: \"40e29be8-46ac-4faf-8185-7148a795d441\") " pod="kube-system/coredns-7c65d6cfc9-dm6pb" Aug 13 00:42:33.148807 kubelet[2732]: I0813 00:42:33.148781 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7j9f\" (UniqueName: \"kubernetes.io/projected/1beb254b-638d-4817-98ac-f5a8ad60ec6e-kube-api-access-r7j9f\") pod \"calico-apiserver-6fcb999d87-6t872\" (UID: \"1beb254b-638d-4817-98ac-f5a8ad60ec6e\") " pod="calico-apiserver/calico-apiserver-6fcb999d87-6t872" Aug 13 00:42:33.148981 kubelet[2732]: I0813 00:42:33.148808 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090fd80a-e98c-47af-a53e-06165e3cc066-config\") pod \"goldmane-58fd7646b9-844tr\" (UID: \"090fd80a-e98c-47af-a53e-06165e3cc066\") " pod="calico-system/goldmane-58fd7646b9-844tr" Aug 13 00:42:33.148981 kubelet[2732]: I0813 00:42:33.148831 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/090fd80a-e98c-47af-a53e-06165e3cc066-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-844tr\" (UID: \"090fd80a-e98c-47af-a53e-06165e3cc066\") " pod="calico-system/goldmane-58fd7646b9-844tr" Aug 13 00:42:33.148981 kubelet[2732]: I0813 00:42:33.148854 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2g6v\" (UniqueName: \"kubernetes.io/projected/d0579e22-efaf-495e-9514-3069b4f2d6be-kube-api-access-p2g6v\") pod \"whisker-697d76b-vp9n6\" (UID: \"d0579e22-efaf-495e-9514-3069b4f2d6be\") " pod="calico-system/whisker-697d76b-vp9n6" Aug 13 00:42:33.148981 kubelet[2732]: I0813 00:42:33.148972 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c15bfda-2353-4522-94fd-e2dfc420915b-tigera-ca-bundle\") pod \"calico-kube-controllers-6bb9ddbc7d-mqfxp\" (UID: \"9c15bfda-2353-4522-94fd-e2dfc420915b\") " pod="calico-system/calico-kube-controllers-6bb9ddbc7d-mqfxp" Aug 13 00:42:33.149083 kubelet[2732]: I0813 00:42:33.149012 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1beb254b-638d-4817-98ac-f5a8ad60ec6e-calico-apiserver-certs\") pod \"calico-apiserver-6fcb999d87-6t872\" (UID: \"1beb254b-638d-4817-98ac-f5a8ad60ec6e\") " pod="calico-apiserver/calico-apiserver-6fcb999d87-6t872" Aug 13 00:42:33.149083 kubelet[2732]: I0813 00:42:33.149037 2732 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/090fd80a-e98c-47af-a53e-06165e3cc066-goldmane-key-pair\") pod \"goldmane-58fd7646b9-844tr\" (UID: \"090fd80a-e98c-47af-a53e-06165e3cc066\") " pod="calico-system/goldmane-58fd7646b9-844tr" Aug 13 00:42:33.149134 kubelet[2732]: I0813 00:42:33.149108 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0579e22-efaf-495e-9514-3069b4f2d6be-whisker-ca-bundle\") pod \"whisker-697d76b-vp9n6\" (UID: \"d0579e22-efaf-495e-9514-3069b4f2d6be\") " pod="calico-system/whisker-697d76b-vp9n6" Aug 13 00:42:33.372082 containerd[1603]: time="2025-08-13T00:42:33.371912543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb999d87-6t872,Uid:1beb254b-638d-4817-98ac-f5a8ad60ec6e,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:42:33.374117 containerd[1603]: time="2025-08-13T00:42:33.373806695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hcds2,Uid:09b573c1-fa3c-4342-84c0-9c27bccb5bed,Namespace:kube-system,Attempt:0,}" Aug 13 00:42:33.375087 containerd[1603]: time="2025-08-13T00:42:33.374968771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb999d87-pw4vs,Uid:0df81e4e-8fb8-429d-9966-a87b9cc013c8,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:42:33.381145 containerd[1603]: time="2025-08-13T00:42:33.381088705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb9ddbc7d-mqfxp,Uid:9c15bfda-2353-4522-94fd-e2dfc420915b,Namespace:calico-system,Attempt:0,}" Aug 13 00:42:33.387378 containerd[1603]: time="2025-08-13T00:42:33.387330960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-697d76b-vp9n6,Uid:d0579e22-efaf-495e-9514-3069b4f2d6be,Namespace:calico-system,Attempt:0,}" Aug 13 00:42:33.388759 containerd[1603]: time="2025-08-13T00:42:33.388616954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-844tr,Uid:090fd80a-e98c-47af-a53e-06165e3cc066,Namespace:calico-system,Attempt:0,}" Aug 13 00:42:33.389694 containerd[1603]: time="2025-08-13T00:42:33.389587510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dm6pb,Uid:40e29be8-46ac-4faf-8185-7148a795d441,Namespace:kube-system,Attempt:0,}" Aug 13 00:42:33.581823 containerd[1603]: time="2025-08-13T00:42:33.581750160Z" level=error msg="Failed to destroy network for sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.582770 containerd[1603]: time="2025-08-13T00:42:33.582673876Z" level=error msg="encountered an error cleaning up failed sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.583159 containerd[1603]: time="2025-08-13T00:42:33.582836716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb999d87-6t872,Uid:1beb254b-638d-4817-98ac-f5a8ad60ec6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed 
to setup network for sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.583803 kubelet[2732]: E0813 00:42:33.583277 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.583803 kubelet[2732]: E0813 00:42:33.583354 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcb999d87-6t872" Aug 13 00:42:33.583803 kubelet[2732]: E0813 00:42:33.583372 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcb999d87-6t872" Aug 13 00:42:33.584101 kubelet[2732]: E0813 00:42:33.583415 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fcb999d87-6t872_calico-apiserver(1beb254b-638d-4817-98ac-f5a8ad60ec6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fcb999d87-6t872_calico-apiserver(1beb254b-638d-4817-98ac-f5a8ad60ec6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcb999d87-6t872" podUID="1beb254b-638d-4817-98ac-f5a8ad60ec6e" Aug 13 00:42:33.618761 containerd[1603]: time="2025-08-13T00:42:33.618707088Z" level=error msg="Failed to destroy network for sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.620817 containerd[1603]: time="2025-08-13T00:42:33.620762520Z" level=error msg="encountered an error cleaning up failed sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.621024 containerd[1603]: time="2025-08-13T00:42:33.620936239Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hcds2,Uid:09b573c1-fa3c-4342-84c0-9c27bccb5bed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.622251 kubelet[2732]: E0813 00:42:33.622144 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.622528 kubelet[2732]: E0813 00:42:33.622329 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hcds2" Aug 13 00:42:33.622528 kubelet[2732]: E0813 00:42:33.622357 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hcds2" Aug 13 00:42:33.622846 kubelet[2732]: E0813 00:42:33.622648 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hcds2_kube-system(09b573c1-fa3c-4342-84c0-9c27bccb5bed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hcds2_kube-system(09b573c1-fa3c-4342-84c0-9c27bccb5bed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hcds2" podUID="09b573c1-fa3c-4342-84c0-9c27bccb5bed" Aug 13 00:42:33.655897 containerd[1603]: time="2025-08-13T00:42:33.655312738Z" level=error msg="Failed to destroy network for sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.655897 containerd[1603]: time="2025-08-13T00:42:33.655761136Z" level=error msg="encountered an error cleaning up failed sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.655897 containerd[1603]: 
time="2025-08-13T00:42:33.655811216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb9ddbc7d-mqfxp,Uid:9c15bfda-2353-4522-94fd-e2dfc420915b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.656157 kubelet[2732]: E0813 00:42:33.656119 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.656213 kubelet[2732]: E0813 00:42:33.656175 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb9ddbc7d-mqfxp" Aug 13 00:42:33.656213 kubelet[2732]: E0813 00:42:33.656195 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb9ddbc7d-mqfxp" Aug 13 00:42:33.656263 kubelet[2732]: E0813 00:42:33.656237 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bb9ddbc7d-mqfxp_calico-system(9c15bfda-2353-4522-94fd-e2dfc420915b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bb9ddbc7d-mqfxp_calico-system(9c15bfda-2353-4522-94fd-e2dfc420915b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb9ddbc7d-mqfxp" podUID="9c15bfda-2353-4522-94fd-e2dfc420915b" Aug 13 00:42:33.658580 containerd[1603]: time="2025-08-13T00:42:33.658468885Z" level=error msg="Failed to destroy network for sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.658807 containerd[1603]: time="2025-08-13T00:42:33.658777843Z" level=error msg="encountered an error cleaning up failed sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.658844 containerd[1603]: time="2025-08-13T00:42:33.658827643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-697d76b-vp9n6,Uid:d0579e22-efaf-495e-9514-3069b4f2d6be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.659248 kubelet[2732]: E0813 00:42:33.659034 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.659248 kubelet[2732]: E0813 00:42:33.659083 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-697d76b-vp9n6" Aug 13 00:42:33.659248 kubelet[2732]: E0813 00:42:33.659101 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-697d76b-vp9n6" Aug 13 00:42:33.659568 kubelet[2732]: E0813 00:42:33.659186 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-697d76b-vp9n6_calico-system(d0579e22-efaf-495e-9514-3069b4f2d6be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-697d76b-vp9n6_calico-system(d0579e22-efaf-495e-9514-3069b4f2d6be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-697d76b-vp9n6" podUID="d0579e22-efaf-495e-9514-3069b4f2d6be" Aug 13 00:42:33.668038 containerd[1603]: time="2025-08-13T00:42:33.667797046Z" level=error msg="Failed to destroy network for sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.668208 containerd[1603]: time="2025-08-13T00:42:33.668108045Z" level=error msg="encountered an error cleaning up failed sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.668208 containerd[1603]: time="2025-08-13T00:42:33.668162605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb999d87-pw4vs,Uid:0df81e4e-8fb8-429d-9966-a87b9cc013c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.668564 kubelet[2732]: E0813 00:42:33.668348 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.668773 kubelet[2732]: E0813 00:42:33.668396 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcb999d87-pw4vs" Aug 13 00:42:33.668773 kubelet[2732]: E0813 00:42:33.668692 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcb999d87-pw4vs" Aug 13 00:42:33.669169 kubelet[2732]: E0813 00:42:33.668926 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fcb999d87-pw4vs_calico-apiserver(0df81e4e-8fb8-429d-9966-a87b9cc013c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fcb999d87-pw4vs_calico-apiserver(0df81e4e-8fb8-429d-9966-a87b9cc013c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcb999d87-pw4vs" podUID="0df81e4e-8fb8-429d-9966-a87b9cc013c8" Aug 13 00:42:33.672817 containerd[1603]: time="2025-08-13T00:42:33.672203268Z" level=error msg="Failed to destroy network for sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.673559 containerd[1603]: time="2025-08-13T00:42:33.673422383Z" level=error msg="encountered an error cleaning up failed 
sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.673637 containerd[1603]: time="2025-08-13T00:42:33.673541343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dm6pb,Uid:40e29be8-46ac-4faf-8185-7148a795d441,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.673924 kubelet[2732]: E0813 00:42:33.673841 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.673985 kubelet[2732]: E0813 00:42:33.673948 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dm6pb" Aug 13 00:42:33.674116 kubelet[2732]: E0813 00:42:33.673990 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dm6pb" Aug 13 00:42:33.674116 kubelet[2732]: E0813 00:42:33.674058 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dm6pb_kube-system(40e29be8-46ac-4faf-8185-7148a795d441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dm6pb_kube-system(40e29be8-46ac-4faf-8185-7148a795d441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dm6pb" podUID="40e29be8-46ac-4faf-8185-7148a795d441" Aug 13 00:42:33.678561 containerd[1603]: time="2025-08-13T00:42:33.678352363Z" level=error msg="Failed to destroy network for sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.679087 containerd[1603]: time="2025-08-13T00:42:33.678931600Z" level=error 
msg="encountered an error cleaning up failed sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.679087 containerd[1603]: time="2025-08-13T00:42:33.678990640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-844tr,Uid:090fd80a-e98c-47af-a53e-06165e3cc066,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.679584 kubelet[2732]: E0813 00:42:33.679395 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:33.679584 kubelet[2732]: E0813 00:42:33.679445 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-844tr" Aug 13 00:42:33.679584 kubelet[2732]: E0813 00:42:33.679468 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-844tr" Aug 13 00:42:33.680010 kubelet[2732]: E0813 00:42:33.679521 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-844tr_calico-system(090fd80a-e98c-47af-a53e-06165e3cc066)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-844tr_calico-system(090fd80a-e98c-47af-a53e-06165e3cc066)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-844tr" podUID="090fd80a-e98c-47af-a53e-06165e3cc066" Aug 13 00:42:33.982253 kubelet[2732]: I0813 00:42:33.982166 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:42:33.983573 containerd[1603]: time="2025-08-13T00:42:33.983524508Z" level=info msg="StopPodSandbox for \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\"" Aug 13 
00:42:33.983794 containerd[1603]: time="2025-08-13T00:42:33.983760907Z" level=info msg="Ensure that sandbox 60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24 in task-service has been cleanup successfully" Aug 13 00:42:33.985185 kubelet[2732]: I0813 00:42:33.984780 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:42:33.985491 containerd[1603]: time="2025-08-13T00:42:33.985388060Z" level=info msg="StopPodSandbox for \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\"" Aug 13 00:42:33.985843 containerd[1603]: time="2025-08-13T00:42:33.985818658Z" level=info msg="Ensure that sandbox 201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532 in task-service has been cleanup successfully" Aug 13 00:42:33.993705 kubelet[2732]: I0813 00:42:33.993263 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:42:33.995494 containerd[1603]: time="2025-08-13T00:42:33.995013660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:42:33.998748 containerd[1603]: time="2025-08-13T00:42:33.997066532Z" level=info msg="StopPodSandbox for \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\"" Aug 13 00:42:33.999976 containerd[1603]: time="2025-08-13T00:42:33.999848081Z" level=info msg="Ensure that sandbox b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717 in task-service has been cleanup successfully" Aug 13 00:42:34.011036 kubelet[2732]: I0813 00:42:34.011008 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:42:34.013101 containerd[1603]: time="2025-08-13T00:42:34.013059589Z" level=info msg="StopPodSandbox for \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\"" Aug 13 00:42:34.013911 containerd[1603]: time="2025-08-13T00:42:34.013230109Z" level=info msg="Ensure that sandbox d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95 in task-service has been cleanup successfully" Aug 13 00:42:34.019997 kubelet[2732]: I0813 00:42:34.019972 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:42:34.022716 containerd[1603]: time="2025-08-13T00:42:34.022681952Z" level=info msg="StopPodSandbox for \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\"" Aug 13 00:42:34.023912 containerd[1603]: time="2025-08-13T00:42:34.023738908Z" level=info msg="Ensure that sandbox 4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5 in task-service has been cleanup successfully" Aug 13 00:42:34.026978 kubelet[2732]: I0813 00:42:34.026944 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:42:34.029109 containerd[1603]: time="2025-08-13T00:42:34.028824969Z" level=info msg="StopPodSandbox for \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\"" Aug 13 00:42:34.029109 containerd[1603]: time="2025-08-13T00:42:34.029039008Z" level=info msg="Ensure that sandbox 06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619 in task-service has been cleanup successfully" Aug 13 00:42:34.040503 kubelet[2732]: I0813 
00:42:34.040473 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:42:34.048738 containerd[1603]: time="2025-08-13T00:42:34.048634252Z" level=info msg="StopPodSandbox for \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\"" Aug 13 00:42:34.048837 containerd[1603]: time="2025-08-13T00:42:34.048812052Z" level=info msg="Ensure that sandbox 3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179 in task-service has been cleanup successfully" Aug 13 00:42:34.081965 containerd[1603]: time="2025-08-13T00:42:34.081052127Z" level=error msg="StopPodSandbox for \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\" failed" error="failed to destroy network for sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.082097 kubelet[2732]: E0813 00:42:34.081289 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:42:34.082097 kubelet[2732]: E0813 00:42:34.081346 2732 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532"} Aug 13 00:42:34.082097 kubelet[2732]: E0813 00:42:34.081409 2732 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1beb254b-638d-4817-98ac-f5a8ad60ec6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:42:34.082097 kubelet[2732]: E0813 00:42:34.081430 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1beb254b-638d-4817-98ac-f5a8ad60ec6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcb999d87-6t872" podUID="1beb254b-638d-4817-98ac-f5a8ad60ec6e" Aug 13 00:42:34.108154 containerd[1603]: time="2025-08-13T00:42:34.108102903Z" level=error msg="StopPodSandbox for \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\" failed" error="failed to destroy network for sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.108820 
kubelet[2732]: E0813 00:42:34.108608 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:42:34.108820 kubelet[2732]: E0813 00:42:34.108701 2732 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24"} Aug 13 00:42:34.108820 kubelet[2732]: E0813 00:42:34.108735 2732 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"090fd80a-e98c-47af-a53e-06165e3cc066\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:42:34.108820 kubelet[2732]: E0813 00:42:34.108784 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"090fd80a-e98c-47af-a53e-06165e3cc066\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-844tr" podUID="090fd80a-e98c-47af-a53e-06165e3cc066" Aug 13 00:42:34.116267 containerd[1603]: time="2025-08-13T00:42:34.116224192Z" level=error msg="StopPodSandbox for \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\" failed" error="failed to destroy network for sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.116781 kubelet[2732]: E0813 00:42:34.116735 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:42:34.117175 kubelet[2732]: E0813 00:42:34.116929 2732 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717"} Aug 13 00:42:34.117175 kubelet[2732]: E0813 00:42:34.116974 2732 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c15bfda-2353-4522-94fd-e2dfc420915b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:42:34.117175 kubelet[2732]: E0813 00:42:34.116996 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c15bfda-2353-4522-94fd-e2dfc420915b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb9ddbc7d-mqfxp" podUID="9c15bfda-2353-4522-94fd-e2dfc420915b" Aug 13 00:42:34.122074 containerd[1603]: time="2025-08-13T00:42:34.121941410Z" level=error msg="StopPodSandbox for \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\" failed" error="failed to destroy network for sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.122173 kubelet[2732]: E0813 00:42:34.122145 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:42:34.122212 kubelet[2732]: E0813 00:42:34.122189 2732 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619"} Aug 13 00:42:34.122237 kubelet[2732]: E0813 00:42:34.122222 2732 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09b573c1-fa3c-4342-84c0-9c27bccb5bed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:42:34.122341 kubelet[2732]: E0813 00:42:34.122242 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09b573c1-fa3c-4342-84c0-9c27bccb5bed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hcds2" podUID="09b573c1-fa3c-4342-84c0-9c27bccb5bed" Aug 13 00:42:34.126366 containerd[1603]: time="2025-08-13T00:42:34.126144113Z" level=error msg="StopPodSandbox for \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\" 
failed" error="failed to destroy network for sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.126452 kubelet[2732]: E0813 00:42:34.126398 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:42:34.126485 kubelet[2732]: E0813 00:42:34.126449 2732 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5"} Aug 13 00:42:34.126510 kubelet[2732]: E0813 00:42:34.126486 2732 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0df81e4e-8fb8-429d-9966-a87b9cc013c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:42:34.126567 kubelet[2732]: E0813 00:42:34.126516 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0df81e4e-8fb8-429d-9966-a87b9cc013c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcb999d87-pw4vs" podUID="0df81e4e-8fb8-429d-9966-a87b9cc013c8" Aug 13 00:42:34.127484 containerd[1603]: time="2025-08-13T00:42:34.127245989Z" level=error msg="StopPodSandbox for \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\" failed" error="failed to destroy network for sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.127559 kubelet[2732]: E0813 00:42:34.127460 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:42:34.127559 kubelet[2732]: E0813 00:42:34.127527 2732 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95"} Aug 13 00:42:34.127634 kubelet[2732]: 
E0813 00:42:34.127562 2732 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0579e22-efaf-495e-9514-3069b4f2d6be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:42:34.127690 kubelet[2732]: E0813 00:42:34.127603 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0579e22-efaf-495e-9514-3069b4f2d6be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-697d76b-vp9n6" podUID="d0579e22-efaf-495e-9514-3069b4f2d6be" Aug 13 00:42:34.143378 containerd[1603]: time="2025-08-13T00:42:34.143306967Z" level=error msg="StopPodSandbox for \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\" failed" error="failed to destroy network for sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.143678 kubelet[2732]: E0813 00:42:34.143590 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:42:34.143742 kubelet[2732]: E0813 00:42:34.143696 2732 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179"} Aug 13 00:42:34.143771 kubelet[2732]: E0813 00:42:34.143747 2732 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40e29be8-46ac-4faf-8185-7148a795d441\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:42:34.143820 kubelet[2732]: E0813 00:42:34.143785 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40e29be8-46ac-4faf-8185-7148a795d441\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7c65d6cfc9-dm6pb" podUID="40e29be8-46ac-4faf-8185-7148a795d441" Aug 13 00:42:34.340756 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5-shm.mount: Deactivated successfully. Aug 13 00:42:34.341039 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619-shm.mount: Deactivated successfully. Aug 13 00:42:34.341213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532-shm.mount: Deactivated successfully. Aug 13 00:42:34.828179 containerd[1603]: time="2025-08-13T00:42:34.827599169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cldrb,Uid:4817502a-aff6-4c70-b804-8c5d92350237,Namespace:calico-system,Attempt:0,}" Aug 13 00:42:34.907193 containerd[1603]: time="2025-08-13T00:42:34.906821463Z" level=error msg="Failed to destroy network for sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.909726 containerd[1603]: time="2025-08-13T00:42:34.909297614Z" level=error msg="encountered an error cleaning up failed sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.909726 containerd[1603]: time="2025-08-13T00:42:34.909399494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cldrb,Uid:4817502a-aff6-4c70-b804-8c5d92350237,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.912905 kubelet[2732]: E0813 00:42:34.910399 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:34.912905 kubelet[2732]: E0813 00:42:34.910482 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cldrb" Aug 13 00:42:34.912905 kubelet[2732]: E0813 00:42:34.910511 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cldrb" Aug 13 00:42:34.913136 kubelet[2732]: E0813 00:42:34.910648 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cldrb_calico-system(4817502a-aff6-4c70-b804-8c5d92350237)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cldrb_calico-system(4817502a-aff6-4c70-b804-8c5d92350237)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cldrb" podUID="4817502a-aff6-4c70-b804-8c5d92350237" Aug 13 00:42:34.914546 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044-shm.mount: Deactivated successfully. Aug 13 00:42:35.045201 kubelet[2732]: I0813 00:42:35.045148 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:42:35.047340 containerd[1603]: time="2025-08-13T00:42:35.046711095Z" level=info msg="StopPodSandbox for \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\"" Aug 13 00:42:35.047500 containerd[1603]: time="2025-08-13T00:42:35.047291933Z" level=info msg="Ensure that sandbox 1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044 in task-service has been cleanup successfully" Aug 13 00:42:35.073970 containerd[1603]: time="2025-08-13T00:42:35.073911676Z" level=error msg="StopPodSandbox for \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\" failed" error="failed to destroy network for sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:42:35.074232 kubelet[2732]: E0813 00:42:35.074174 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:42:35.074288 kubelet[2732]: E0813 00:42:35.074241 2732 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044"} Aug 13 00:42:35.074319 kubelet[2732]: E0813 00:42:35.074289 2732 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4817502a-aff6-4c70-b804-8c5d92350237\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 
00:42:35.074377 kubelet[2732]: E0813 00:42:35.074322 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4817502a-aff6-4c70-b804-8c5d92350237\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cldrb" podUID="4817502a-aff6-4c70-b804-8c5d92350237" Aug 13 00:42:38.495959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040769633.mount: Deactivated successfully. Aug 13 00:42:38.523932 containerd[1603]: time="2025-08-13T00:42:38.523749163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:38.525526 containerd[1603]: time="2025-08-13T00:42:38.525448798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 13 00:42:38.526253 containerd[1603]: time="2025-08-13T00:42:38.526162396Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:38.529430 containerd[1603]: time="2025-08-13T00:42:38.529360226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:38.530309 containerd[1603]: time="2025-08-13T00:42:38.530152584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.535100044s" Aug 13 00:42:38.530309 containerd[1603]: time="2025-08-13T00:42:38.530192904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:42:38.542205 containerd[1603]: time="2025-08-13T00:42:38.542135468Z" level=info msg="CreateContainer within sandbox \"b7c0cd0278bbd3284585ee5337e2d81a3c67c51d00b2566dc518cda481e1ab78\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:42:38.567220 containerd[1603]: time="2025-08-13T00:42:38.567087434Z" level=info msg="CreateContainer within sandbox \"b7c0cd0278bbd3284585ee5337e2d81a3c67c51d00b2566dc518cda481e1ab78\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cd2baa22241ad62b868b5eaf262e99c5ba002e4596950fc71af772a10b11f3ae\"" Aug 13 00:42:38.569550 containerd[1603]: time="2025-08-13T00:42:38.569450507Z" level=info msg="StartContainer for \"cd2baa22241ad62b868b5eaf262e99c5ba002e4596950fc71af772a10b11f3ae\"" Aug 13 00:42:38.640980 containerd[1603]: time="2025-08-13T00:42:38.640320536Z" level=info msg="StartContainer for \"cd2baa22241ad62b868b5eaf262e99c5ba002e4596950fc71af772a10b11f3ae\" returns successfully" Aug 13 00:42:38.795547 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
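The sandbox failures above share a single root cause: the Calico CNI plugin resolves the node's identity from /var/lib/calico/nodename, a file written by the calico/node container after it starts, so every pod-network add and delete fails with the same stat error until that container is Running — and here it starts only once the ghcr.io/flatcar/calico/node:v3.30.2 pull completes. A minimal Go sketch of the lookup, assuming only the documented role of the nodename file (detectNodename is an illustrative name, not Calico's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Written by the calico/node container at startup; absent until then.
const nodenameFile = "/var/lib/calico/nodename"

// detectNodename reproduces the failure mode logged above: when the file
// is missing, os.Stat yields "stat /var/lib/calico/nodename: no such file
// or directory", and the plugin appends the hint about calico/node.
func detectNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}

Until the file appears, kubelet keeps retrying RunPodSandbox and StopPodSandbox for each affected pod, which is why the identical error repeats above for the calico-apiserver, coredns, goldmane, whisker, kube-controllers, and csi-node-driver pods.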
Aug 13 00:42:38.795721 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 13 00:42:38.945242 containerd[1603]: time="2025-08-13T00:42:38.943544713Z" level=info msg="StopPodSandbox for \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\"" Aug 13 00:42:39.102247 kubelet[2732]: I0813 00:42:39.101671 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k4bq9" podStartSLOduration=1.4185730570000001 podStartE2EDuration="14.101652901s" podCreationTimestamp="2025-08-13 00:42:25 +0000 UTC" firstStartedPulling="2025-08-13 00:42:25.849323813 +0000 UTC m=+24.171922924" lastFinishedPulling="2025-08-13 00:42:38.532403657 +0000 UTC m=+36.855002768" observedRunningTime="2025-08-13 00:42:39.093334404 +0000 UTC m=+37.415933515" watchObservedRunningTime="2025-08-13 00:42:39.101652901 +0000 UTC m=+37.424252012" Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.097 [INFO][3904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.097 [INFO][3904] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" iface="eth0" netns="/var/run/netns/cni-08705d0a-b144-4c89-e319-71533dc07d98" Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.100 [INFO][3904] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" iface="eth0" netns="/var/run/netns/cni-08705d0a-b144-4c89-e319-71533dc07d98" Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.100 [INFO][3904] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" iface="eth0" netns="/var/run/netns/cni-08705d0a-b144-4c89-e319-71533dc07d98" Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.100 [INFO][3904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.100 [INFO][3904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.193 [INFO][3919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" HandleID="k8s-pod-network.d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.193 [INFO][3919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.193 [INFO][3919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.207 [WARNING][3919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" HandleID="k8s-pod-network.d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.207 [INFO][3919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" HandleID="k8s-pod-network.d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.209 [INFO][3919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:42:39.219947 containerd[1603]: 2025-08-13 00:42:39.215 [INFO][3904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:42:39.219947 containerd[1603]: time="2025-08-13T00:42:39.218351655Z" level=info msg="TearDown network for sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\" successfully" Aug 13 00:42:39.219947 containerd[1603]: time="2025-08-13T00:42:39.218378895Z" level=info msg="StopPodSandbox for \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\" returns successfully" Aug 13 00:42:39.297783 kubelet[2732]: I0813 00:42:39.297728 2732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2g6v\" (UniqueName: \"kubernetes.io/projected/d0579e22-efaf-495e-9514-3069b4f2d6be-kube-api-access-p2g6v\") pod \"d0579e22-efaf-495e-9514-3069b4f2d6be\" (UID: \"d0579e22-efaf-495e-9514-3069b4f2d6be\") " Aug 13 00:42:39.299158 kubelet[2732]: I0813 00:42:39.298005 2732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0579e22-efaf-495e-9514-3069b4f2d6be-whisker-backend-key-pair\") pod \"d0579e22-efaf-495e-9514-3069b4f2d6be\" (UID: \"d0579e22-efaf-495e-9514-3069b4f2d6be\") " Aug 13 00:42:39.299158 kubelet[2732]: I0813 00:42:39.298058 2732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0579e22-efaf-495e-9514-3069b4f2d6be-whisker-ca-bundle\") pod \"d0579e22-efaf-495e-9514-3069b4f2d6be\" (UID: \"d0579e22-efaf-495e-9514-3069b4f2d6be\") " Aug 13 00:42:39.299158 kubelet[2732]: I0813 00:42:39.298772 2732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0579e22-efaf-495e-9514-3069b4f2d6be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d0579e22-efaf-495e-9514-3069b4f2d6be" (UID: "d0579e22-efaf-495e-9514-3069b4f2d6be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:42:39.301991 kubelet[2732]: I0813 00:42:39.301948 2732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0579e22-efaf-495e-9514-3069b4f2d6be-kube-api-access-p2g6v" (OuterVolumeSpecName: "kube-api-access-p2g6v") pod "d0579e22-efaf-495e-9514-3069b4f2d6be" (UID: "d0579e22-efaf-495e-9514-3069b4f2d6be"). InnerVolumeSpecName "kube-api-access-p2g6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:42:39.305076 kubelet[2732]: I0813 00:42:39.305032 2732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0579e22-efaf-495e-9514-3069b4f2d6be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d0579e22-efaf-495e-9514-3069b4f2d6be" (UID: "d0579e22-efaf-495e-9514-3069b4f2d6be"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:42:39.398774 kubelet[2732]: I0813 00:42:39.398654 2732 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0579e22-efaf-495e-9514-3069b4f2d6be-whisker-backend-key-pair\") on node \"ci-4081-3-5-c-674096e178\" DevicePath \"\"" Aug 13 00:42:39.399083 kubelet[2732]: I0813 00:42:39.399055 2732 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0579e22-efaf-495e-9514-3069b4f2d6be-whisker-ca-bundle\") on node \"ci-4081-3-5-c-674096e178\" DevicePath \"\"" Aug 13 00:42:39.399208 kubelet[2732]: I0813 00:42:39.399187 2732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2g6v\" (UniqueName: \"kubernetes.io/projected/d0579e22-efaf-495e-9514-3069b4f2d6be-kube-api-access-p2g6v\") on node \"ci-4081-3-5-c-674096e178\" DevicePath \"\"" Aug 13 00:42:39.496817 systemd[1]: run-netns-cni\x2d08705d0a\x2db144\x2d4c89\x2de319\x2d71533dc07d98.mount: Deactivated successfully. Aug 13 00:42:39.497000 systemd[1]: var-lib-kubelet-pods-d0579e22\x2defaf\x2d495e\x2d9514\x2d3069b4f2d6be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2g6v.mount: Deactivated successfully. Aug 13 00:42:39.497094 systemd[1]: var-lib-kubelet-pods-d0579e22\x2defaf\x2d495e\x2d9514\x2d3069b4f2d6be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 00:42:40.206807 kubelet[2732]: I0813 00:42:40.206745 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ee28d75-3578-4bcb-ae89-fc03cb100440-whisker-ca-bundle\") pod \"whisker-558889c9d-bp9ks\" (UID: \"4ee28d75-3578-4bcb-ae89-fc03cb100440\") " pod="calico-system/whisker-558889c9d-bp9ks" Aug 13 00:42:40.207510 kubelet[2732]: I0813 00:42:40.206856 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4ee28d75-3578-4bcb-ae89-fc03cb100440-whisker-backend-key-pair\") pod \"whisker-558889c9d-bp9ks\" (UID: \"4ee28d75-3578-4bcb-ae89-fc03cb100440\") " pod="calico-system/whisker-558889c9d-bp9ks" Aug 13 00:42:40.207510 kubelet[2732]: I0813 00:42:40.206974 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4s9p\" (UniqueName: \"kubernetes.io/projected/4ee28d75-3578-4bcb-ae89-fc03cb100440-kube-api-access-b4s9p\") pod \"whisker-558889c9d-bp9ks\" (UID: \"4ee28d75-3578-4bcb-ae89-fc03cb100440\") " pod="calico-system/whisker-558889c9d-bp9ks" Aug 13 00:42:40.450798 containerd[1603]: time="2025-08-13T00:42:40.450399493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558889c9d-bp9ks,Uid:4ee28d75-3578-4bcb-ae89-fc03cb100440,Namespace:calico-system,Attempt:0,}" Aug 13 00:42:40.676835 systemd-networkd[1238]: cali8dee99c2434: Link UP Aug 13 00:42:40.681124 systemd-networkd[1238]: cali8dee99c2434: Gained carrier Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.514 [INFO][4075] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.537 [INFO][4075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0 whisker-558889c9d- calico-system 4ee28d75-3578-4bcb-ae89-fc03cb100440 875 0 2025-08-13 00:42:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:558889c9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-c-674096e178 whisker-558889c9d-bp9ks eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8dee99c2434 [] [] }} ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Namespace="calico-system" Pod="whisker-558889c9d-bp9ks" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.537 [INFO][4075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Namespace="calico-system" Pod="whisker-558889c9d-bp9ks" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.583 [INFO][4088] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" HandleID="k8s-pod-network.453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.584 [INFO][4088] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" HandleID="k8s-pod-network.453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002736e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-c-674096e178", "pod":"whisker-558889c9d-bp9ks", "timestamp":"2025-08-13 00:42:40.583915263 +0000 UTC"}, Hostname:"ci-4081-3-5-c-674096e178", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.584 [INFO][4088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.584 [INFO][4088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.584 [INFO][4088] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-c-674096e178' Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.596 [INFO][4088] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.602 [INFO][4088] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-c-674096e178" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.608 [INFO][4088] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.613 [INFO][4088] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.621 [INFO][4088] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.621 [INFO][4088] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.623 [INFO][4088] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200 Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.628 [INFO][4088] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.646 [INFO][4088] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.193/26] block=192.168.125.192/26 handle="k8s-pod-network.453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.646 [INFO][4088] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.193/26] handle="k8s-pod-network.453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 
00:42:40.646 [INFO][4088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:42:40.721300 containerd[1603]: 2025-08-13 00:42:40.646 [INFO][4088] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.193/26] IPv6=[] ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" HandleID="k8s-pod-network.453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0" Aug 13 00:42:40.723516 containerd[1603]: 2025-08-13 00:42:40.655 [INFO][4075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Namespace="calico-system" Pod="whisker-558889c9d-bp9ks" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0", GenerateName:"whisker-558889c9d-", Namespace:"calico-system", SelfLink:"", UID:"4ee28d75-3578-4bcb-ae89-fc03cb100440", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"558889c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"", Pod:"whisker-558889c9d-bp9ks", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8dee99c2434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:40.723516 containerd[1603]: 2025-08-13 00:42:40.655 [INFO][4075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.193/32] ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Namespace="calico-system" Pod="whisker-558889c9d-bp9ks" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0" Aug 13 00:42:40.723516 containerd[1603]: 2025-08-13 00:42:40.655 [INFO][4075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8dee99c2434 ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Namespace="calico-system" Pod="whisker-558889c9d-bp9ks" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0" Aug 13 00:42:40.723516 containerd[1603]: 2025-08-13 00:42:40.685 [INFO][4075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Namespace="calico-system" Pod="whisker-558889c9d-bp9ks" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0" Aug 13 00:42:40.723516 containerd[1603]: 2025-08-13 00:42:40.686 [INFO][4075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Namespace="calico-system" Pod="whisker-558889c9d-bp9ks" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0", GenerateName:"whisker-558889c9d-", Namespace:"calico-system", SelfLink:"", UID:"4ee28d75-3578-4bcb-ae89-fc03cb100440", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"558889c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200", Pod:"whisker-558889c9d-bp9ks", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8dee99c2434", MAC:"8a:dc:57:bc:a4:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:40.723516 containerd[1603]: 2025-08-13 00:42:40.709 [INFO][4075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200" Namespace="calico-system" Pod="whisker-558889c9d-bp9ks" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-whisker--558889c9d--bp9ks-eth0" Aug 13 00:42:40.748030 containerd[1603]: time="2025-08-13T00:42:40.747758114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:42:40.748030 containerd[1603]: time="2025-08-13T00:42:40.747856754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:42:40.748030 containerd[1603]: time="2025-08-13T00:42:40.747912074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:40.749403 containerd[1603]: time="2025-08-13T00:42:40.749214351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:40.810509 containerd[1603]: time="2025-08-13T00:42:40.810443430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558889c9d-bp9ks,Uid:4ee28d75-3578-4bcb-ae89-fc03cb100440,Namespace:calico-system,Attempt:0,} returns sandbox id \"453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200\"" Aug 13 00:42:40.813955 containerd[1603]: time="2025-08-13T00:42:40.813199543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:42:41.828714 kubelet[2732]: I0813 00:42:41.828649 2732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0579e22-efaf-495e-9514-3069b4f2d6be" path="/var/lib/kubelet/pods/d0579e22-efaf-495e-9514-3069b4f2d6be/volumes" Aug 13 00:42:42.252402 kubelet[2732]: I0813 00:42:42.251966 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:42:42.284595 systemd-networkd[1238]: cali8dee99c2434: Gained IPv6LL Aug 13 00:42:42.393356 containerd[1603]: time="2025-08-13T00:42:42.393273655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:42.395606 containerd[1603]: time="2025-08-13T00:42:42.395197410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:42:42.396957 containerd[1603]: time="2025-08-13T00:42:42.396924126Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:42.400961 containerd[1603]: time="2025-08-13T00:42:42.400195119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:42.401314 containerd[1603]: time="2025-08-13T00:42:42.401283396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.587235095s" Aug 13 00:42:42.401406 containerd[1603]: time="2025-08-13T00:42:42.401390916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:42:42.405859 containerd[1603]: time="2025-08-13T00:42:42.405337027Z" level=info msg="CreateContainer within sandbox \"453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:42:42.430694 containerd[1603]: time="2025-08-13T00:42:42.430609369Z" level=info msg="CreateContainer within sandbox \"453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"59fb9f7da5f93e7b0a42af145b970a917dfa1168b2f0f3d2ec32d846e395651b\"" Aug 13 00:42:42.431853 containerd[1603]: time="2025-08-13T00:42:42.431526127Z" level=info msg="StartContainer for \"59fb9f7da5f93e7b0a42af145b970a917dfa1168b2f0f3d2ec32d846e395651b\"" Aug 13 00:42:42.505179 containerd[1603]: time="2025-08-13T00:42:42.504835038Z" level=info msg="StartContainer for 
\"59fb9f7da5f93e7b0a42af145b970a917dfa1168b2f0f3d2ec32d846e395651b\" returns successfully" Aug 13 00:42:42.510260 containerd[1603]: time="2025-08-13T00:42:42.509856586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:42:42.730973 kernel: bpftool[4232]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:42:43.011298 systemd-networkd[1238]: vxlan.calico: Link UP Aug 13 00:42:43.011307 systemd-networkd[1238]: vxlan.calico: Gained carrier Aug 13 00:42:44.589005 systemd-networkd[1238]: vxlan.calico: Gained IPv6LL Aug 13 00:42:45.085009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678524573.mount: Deactivated successfully. Aug 13 00:42:45.104474 containerd[1603]: time="2025-08-13T00:42:45.102930564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:45.104474 containerd[1603]: time="2025-08-13T00:42:45.104416482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:42:45.105407 containerd[1603]: time="2025-08-13T00:42:45.105321480Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:45.108474 containerd[1603]: time="2025-08-13T00:42:45.108412594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:45.109790 containerd[1603]: time="2025-08-13T00:42:45.109748551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.599494205s" Aug 13 00:42:45.109995 containerd[1603]: time="2025-08-13T00:42:45.109970751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:42:45.136011 containerd[1603]: time="2025-08-13T00:42:45.135949662Z" level=info msg="CreateContainer within sandbox \"453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:42:45.152722 containerd[1603]: time="2025-08-13T00:42:45.152460190Z" level=info msg="CreateContainer within sandbox \"453a83cdfd5fd900f6779baad13e495aa7d2b86f57b897cb89a7b926c8d5f200\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"de2edad54a09af37a7f7aa531ff3dcd2b6f01127a8426b00496889c365eb8c6c\"" Aug 13 00:42:45.153312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324854902.mount: Deactivated successfully. 
Aug 13 00:42:45.155902 containerd[1603]: time="2025-08-13T00:42:45.155714264Z" level=info msg="StartContainer for \"de2edad54a09af37a7f7aa531ff3dcd2b6f01127a8426b00496889c365eb8c6c\"" Aug 13 00:42:45.225400 containerd[1603]: time="2025-08-13T00:42:45.225339172Z" level=info msg="StartContainer for \"de2edad54a09af37a7f7aa531ff3dcd2b6f01127a8426b00496889c365eb8c6c\" returns successfully" Aug 13 00:42:45.836987 containerd[1603]: time="2025-08-13T00:42:45.836425774Z" level=info msg="StopPodSandbox for \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\"" Aug 13 00:42:45.837874 containerd[1603]: time="2025-08-13T00:42:45.837825531Z" level=info msg="StopPodSandbox for \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\"" Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.904 [INFO][4399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.904 [INFO][4399] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" iface="eth0" netns="/var/run/netns/cni-d47c2428-399e-312e-4182-8efe5deac49c" Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.905 [INFO][4399] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" iface="eth0" netns="/var/run/netns/cni-d47c2428-399e-312e-4182-8efe5deac49c" Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.905 [INFO][4399] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" iface="eth0" netns="/var/run/netns/cni-d47c2428-399e-312e-4182-8efe5deac49c" Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.905 [INFO][4399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.905 [INFO][4399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.941 [INFO][4412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" HandleID="k8s-pod-network.3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.941 [INFO][4412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.941 [INFO][4412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.952 [WARNING][4412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" HandleID="k8s-pod-network.3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.952 [INFO][4412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" HandleID="k8s-pod-network.3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.955 [INFO][4412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:42:45.963011 containerd[1603]: 2025-08-13 00:42:45.959 [INFO][4399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:42:45.966423 containerd[1603]: time="2025-08-13T00:42:45.966285208Z" level=info msg="TearDown network for sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\" successfully" Aug 13 00:42:45.966423 containerd[1603]: time="2025-08-13T00:42:45.966331728Z" level=info msg="StopPodSandbox for \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\" returns successfully" Aug 13 00:42:45.967612 systemd[1]: run-netns-cni\x2dd47c2428\x2d399e\x2d312e\x2d4182\x2d8efe5deac49c.mount: Deactivated successfully. Aug 13 00:42:45.982293 containerd[1603]: time="2025-08-13T00:42:45.982263617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dm6pb,Uid:40e29be8-46ac-4faf-8185-7148a795d441,Namespace:kube-system,Attempt:1,}" Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.919 [INFO][4398] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.919 [INFO][4398] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" iface="eth0" netns="/var/run/netns/cni-86be53b4-6969-5298-e155-977c7e4178c8" Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.920 [INFO][4398] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" iface="eth0" netns="/var/run/netns/cni-86be53b4-6969-5298-e155-977c7e4178c8" Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.920 [INFO][4398] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" iface="eth0" netns="/var/run/netns/cni-86be53b4-6969-5298-e155-977c7e4178c8" Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.920 [INFO][4398] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.920 [INFO][4398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.947 [INFO][4417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" HandleID="k8s-pod-network.06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.948 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.955 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.972 [WARNING][4417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" HandleID="k8s-pod-network.06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.972 [INFO][4417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" HandleID="k8s-pod-network.06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.976 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:42:45.983234 containerd[1603]: 2025-08-13 00:42:45.980 [INFO][4398] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:42:45.985667 containerd[1603]: time="2025-08-13T00:42:45.983485935Z" level=info msg="TearDown network for sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\" successfully" Aug 13 00:42:45.985667 containerd[1603]: time="2025-08-13T00:42:45.983530255Z" level=info msg="StopPodSandbox for \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\" returns successfully" Aug 13 00:42:45.987403 containerd[1603]: time="2025-08-13T00:42:45.985963370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hcds2,Uid:09b573c1-fa3c-4342-84c0-9c27bccb5bed,Namespace:kube-system,Attempt:1,}" Aug 13 00:42:45.989446 systemd[1]: run-netns-cni\x2d86be53b4\x2d6969\x2d5298\x2de155\x2d977c7e4178c8.mount: Deactivated successfully. 
Aug 13 00:42:46.215725 systemd-networkd[1238]: calif72b3ef10aa: Link UP Aug 13 00:42:46.216779 systemd-networkd[1238]: calif72b3ef10aa: Gained carrier Aug 13 00:42:46.235136 kubelet[2732]: I0813 00:42:46.235062 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-558889c9d-bp9ks" podStartSLOduration=1.930787453 podStartE2EDuration="6.235038486s" podCreationTimestamp="2025-08-13 00:42:40 +0000 UTC" firstStartedPulling="2025-08-13 00:42:40.812041546 +0000 UTC m=+39.134640657" lastFinishedPulling="2025-08-13 00:42:45.116292579 +0000 UTC m=+43.438891690" observedRunningTime="2025-08-13 00:42:46.123829684 +0000 UTC m=+44.446428795" watchObservedRunningTime="2025-08-13 00:42:46.235038486 +0000 UTC m=+44.557637717" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.073 [INFO][4434] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0 coredns-7c65d6cfc9- kube-system 09b573c1-fa3c-4342-84c0-9c27bccb5bed 909 0 2025-08-13 00:42:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-c-674096e178 coredns-7c65d6cfc9-hcds2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif72b3ef10aa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hcds2" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.074 [INFO][4434] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hcds2" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.132 [INFO][4449] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" HandleID="k8s-pod-network.be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.133 [INFO][4449] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" HandleID="k8s-pod-network.be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3040), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-c-674096e178", "pod":"coredns-7c65d6cfc9-hcds2", "timestamp":"2025-08-13 00:42:46.132637388 +0000 UTC"}, Hostname:"ci-4081-3-5-c-674096e178", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.133 [INFO][4449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.133 [INFO][4449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.133 [INFO][4449] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-c-674096e178' Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.158 [INFO][4449] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.164 [INFO][4449] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.171 [INFO][4449] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.173 [INFO][4449] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.179 [INFO][4449] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.180 [INFO][4449] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.181 [INFO][4449] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.190 [INFO][4449] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.200 [INFO][4449] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.194/26] block=192.168.125.192/26 handle="k8s-pod-network.be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.200 [INFO][4449] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.194/26] handle="k8s-pod-network.be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.200 [INFO][4449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:42:46.236606 containerd[1603]: 2025-08-13 00:42:46.200 [INFO][4449] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.194/26] IPv6=[] ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" HandleID="k8s-pod-network.be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:42:46.237322 containerd[1603]: 2025-08-13 00:42:46.207 [INFO][4434] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hcds2" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"09b573c1-fa3c-4342-84c0-9c27bccb5bed", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"", Pod:"coredns-7c65d6cfc9-hcds2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif72b3ef10aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:46.237322 containerd[1603]: 2025-08-13 00:42:46.208 [INFO][4434] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.194/32] ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hcds2" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:42:46.237322 containerd[1603]: 2025-08-13 00:42:46.208 [INFO][4434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif72b3ef10aa ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hcds2" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:42:46.237322 containerd[1603]: 2025-08-13 00:42:46.217 [INFO][4434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-hcds2" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:42:46.237322 containerd[1603]: 2025-08-13 00:42:46.217 [INFO][4434] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hcds2" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"09b573c1-fa3c-4342-84c0-9c27bccb5bed", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c", Pod:"coredns-7c65d6cfc9-hcds2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif72b3ef10aa", MAC:"a2:e2:3f:a1:a8:cb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:46.237322 containerd[1603]: 2025-08-13 00:42:46.232 [INFO][4434] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hcds2" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:42:46.260037 containerd[1603]: time="2025-08-13T00:42:46.259164643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:42:46.260037 containerd[1603]: time="2025-08-13T00:42:46.259223523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:42:46.260037 containerd[1603]: time="2025-08-13T00:42:46.259239523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:46.260037 containerd[1603]: time="2025-08-13T00:42:46.259329883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:46.321556 systemd-networkd[1238]: cali6c604e13cbe: Link UP Aug 13 00:42:46.321757 systemd-networkd[1238]: cali6c604e13cbe: Gained carrier Aug 13 00:42:46.340264 containerd[1603]: time="2025-08-13T00:42:46.340141499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hcds2,Uid:09b573c1-fa3c-4342-84c0-9c27bccb5bed,Namespace:kube-system,Attempt:1,} returns sandbox id \"be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c\"" Aug 13 00:42:46.348051 containerd[1603]: time="2025-08-13T00:42:46.347907405Z" level=info msg="CreateContainer within sandbox \"be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.075 [INFO][4426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0 coredns-7c65d6cfc9- kube-system 40e29be8-46ac-4faf-8185-7148a795d441 908 0 2025-08-13 00:42:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-c-674096e178 coredns-7c65d6cfc9-dm6pb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c604e13cbe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dm6pb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.076 [INFO][4426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dm6pb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.164 [INFO][4454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" HandleID="k8s-pod-network.6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.164 [INFO][4454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" HandleID="k8s-pod-network.6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3720), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-c-674096e178", "pod":"coredns-7c65d6cfc9-dm6pb", "timestamp":"2025-08-13 00:42:46.163990612 +0000 UTC"}, Hostname:"ci-4081-3-5-c-674096e178", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.164 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.200 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.201 [INFO][4454] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-c-674096e178' Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.258 [INFO][4454] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.267 [INFO][4454] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.275 [INFO][4454] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.280 [INFO][4454] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.284 [INFO][4454] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.284 [INFO][4454] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.287 [INFO][4454] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130 Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.299 [INFO][4454] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.310 [INFO][4454] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.195/26] block=192.168.125.192/26 handle="k8s-pod-network.6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.310 [INFO][4454] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.195/26] handle="k8s-pod-network.6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.310 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
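Every workload brought up so far draws from the same affine block, 192.168.125.192/26, and receives consecutive addresses (.193 for whisker, .194 and .195 for the two coredns pods). A quick sanity check of that block with the standard library:

```python
import ipaddress

block = ipaddress.ip_network("192.168.125.192/26")
assigned = [ipaddress.ip_address(a) for a in
            ("192.168.125.193", "192.168.125.194", "192.168.125.195")]

print(block.num_addresses)                  # 64 addresses in a /26 block
print(all(a in block for a in assigned))    # True - every claimed IP falls inside the block
print(block.num_addresses - len(assigned))  # 61 addresses left, ignoring any reserved ones
```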
Aug 13 00:42:46.354187 containerd[1603]: 2025-08-13 00:42:46.310 [INFO][4454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.195/26] IPv6=[] ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" HandleID="k8s-pod-network.6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:42:46.355381 containerd[1603]: 2025-08-13 00:42:46.313 [INFO][4426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dm6pb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"40e29be8-46ac-4faf-8185-7148a795d441", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"", Pod:"coredns-7c65d6cfc9-dm6pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c604e13cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:46.355381 containerd[1603]: 2025-08-13 00:42:46.313 [INFO][4426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.195/32] ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dm6pb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:42:46.355381 containerd[1603]: 2025-08-13 00:42:46.313 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c604e13cbe ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dm6pb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:42:46.355381 containerd[1603]: 2025-08-13 00:42:46.321 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-dm6pb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:42:46.355381 containerd[1603]: 2025-08-13 00:42:46.322 [INFO][4426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dm6pb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"40e29be8-46ac-4faf-8185-7148a795d441", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130", Pod:"coredns-7c65d6cfc9-dm6pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c604e13cbe", MAC:"1e:0b:5a:2f:ea:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:46.355381 containerd[1603]: 2025-08-13 00:42:46.349 [INFO][4426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dm6pb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:42:46.364125 containerd[1603]: time="2025-08-13T00:42:46.364047697Z" level=info msg="CreateContainer within sandbox \"be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1a1ae56a2e47a353e0d06807d075a26f647320bb984527045ba019d75fc1136\"" Aug 13 00:42:46.365907 containerd[1603]: time="2025-08-13T00:42:46.365607494Z" level=info msg="StartContainer for \"c1a1ae56a2e47a353e0d06807d075a26f647320bb984527045ba019d75fc1136\"" Aug 13 00:42:46.382984 containerd[1603]: time="2025-08-13T00:42:46.382784223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:42:46.383087 containerd[1603]: time="2025-08-13T00:42:46.382945023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:42:46.383087 containerd[1603]: time="2025-08-13T00:42:46.382966503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:46.383168 containerd[1603]: time="2025-08-13T00:42:46.383122143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:46.435165 containerd[1603]: time="2025-08-13T00:42:46.435025291Z" level=info msg="StartContainer for \"c1a1ae56a2e47a353e0d06807d075a26f647320bb984527045ba019d75fc1136\" returns successfully" Aug 13 00:42:46.444151 containerd[1603]: time="2025-08-13T00:42:46.444116354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dm6pb,Uid:40e29be8-46ac-4faf-8185-7148a795d441,Namespace:kube-system,Attempt:1,} returns sandbox id \"6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130\"" Aug 13 00:42:46.451063 containerd[1603]: time="2025-08-13T00:42:46.450921382Z" level=info msg="CreateContainer within sandbox \"6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:42:46.468592 containerd[1603]: time="2025-08-13T00:42:46.468351191Z" level=info msg="CreateContainer within sandbox \"6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7210cc36dcc99cbc0e40cb36b1d89a5a364a4b6a56ba8db71dd0ca6ad8e28bb8\"" Aug 13 00:42:46.471770 containerd[1603]: time="2025-08-13T00:42:46.469536829Z" level=info msg="StartContainer for \"7210cc36dcc99cbc0e40cb36b1d89a5a364a4b6a56ba8db71dd0ca6ad8e28bb8\"" Aug 13 00:42:46.529026 containerd[1603]: time="2025-08-13T00:42:46.528989204Z" level=info msg="StartContainer for \"7210cc36dcc99cbc0e40cb36b1d89a5a364a4b6a56ba8db71dd0ca6ad8e28bb8\" returns successfully" Aug 13 00:42:47.127076 kubelet[2732]: I0813 00:42:47.126831 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hcds2" podStartSLOduration=39.126810875 podStartE2EDuration="39.126810875s" podCreationTimestamp="2025-08-13 00:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:42:47.12401996 +0000 UTC m=+45.446619151" watchObservedRunningTime="2025-08-13 00:42:47.126810875 +0000 UTC m=+45.449410026" Aug 13 00:42:47.160282 kubelet[2732]: I0813 00:42:47.158920 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dm6pb" podStartSLOduration=39.158899701 podStartE2EDuration="39.158899701s" podCreationTimestamp="2025-08-13 00:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:42:47.157597384 +0000 UTC m=+45.480196535" watchObservedRunningTime="2025-08-13 00:42:47.158899701 +0000 UTC m=+45.481498812" Aug 13 00:42:47.340299 systemd-networkd[1238]: calif72b3ef10aa: Gained IPv6LL Aug 13 00:42:47.660327 systemd-networkd[1238]: cali6c604e13cbe: Gained IPv6LL Aug 13 00:42:47.828443 containerd[1603]: time="2025-08-13T00:42:47.827282828Z" 
level=info msg="StopPodSandbox for \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\"" Aug 13 00:42:47.828443 containerd[1603]: time="2025-08-13T00:42:47.827928987Z" level=info msg="StopPodSandbox for \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\"" Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.903 [INFO][4664] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.903 [INFO][4664] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" iface="eth0" netns="/var/run/netns/cni-fea42315-a9b4-c539-1f89-1d8c5edb7c48" Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.903 [INFO][4664] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" iface="eth0" netns="/var/run/netns/cni-fea42315-a9b4-c539-1f89-1d8c5edb7c48" Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.904 [INFO][4664] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" iface="eth0" netns="/var/run/netns/cni-fea42315-a9b4-c539-1f89-1d8c5edb7c48" Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.904 [INFO][4664] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.904 [INFO][4664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.931 [INFO][4683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" HandleID="k8s-pod-network.b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.932 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.932 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.941 [WARNING][4683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" HandleID="k8s-pod-network.b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.941 [INFO][4683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" HandleID="k8s-pod-network.b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.944 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:42:47.951489 containerd[1603]: 2025-08-13 00:42:47.948 [INFO][4664] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:42:47.954967 containerd[1603]: time="2025-08-13T00:42:47.954287256Z" level=info msg="TearDown network for sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\" successfully" Aug 13 00:42:47.955592 containerd[1603]: time="2025-08-13T00:42:47.955099735Z" level=info msg="StopPodSandbox for \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\" returns successfully" Aug 13 00:42:47.957681 containerd[1603]: time="2025-08-13T00:42:47.957153651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb9ddbc7d-mqfxp,Uid:9c15bfda-2353-4522-94fd-e2dfc420915b,Namespace:calico-system,Attempt:1,}" Aug 13 00:42:47.958352 systemd[1]: run-netns-cni\x2dfea42315\x2da9b4\x2dc539\x2d1f89\x2d1d8c5edb7c48.mount: Deactivated successfully. Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.894 [INFO][4665] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.895 [INFO][4665] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" iface="eth0" netns="/var/run/netns/cni-e9bc7b06-6255-6d43-ece8-8468c817b8f4" Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.897 [INFO][4665] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" iface="eth0" netns="/var/run/netns/cni-e9bc7b06-6255-6d43-ece8-8468c817b8f4" Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.898 [INFO][4665] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" iface="eth0" netns="/var/run/netns/cni-e9bc7b06-6255-6d43-ece8-8468c817b8f4" Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.898 [INFO][4665] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.898 [INFO][4665] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.940 [INFO][4678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" HandleID="k8s-pod-network.60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.940 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.944 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.954 [WARNING][4678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" HandleID="k8s-pod-network.60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.954 [INFO][4678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" HandleID="k8s-pod-network.60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.957 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:42:47.964630 containerd[1603]: 2025-08-13 00:42:47.961 [INFO][4665] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:42:47.967134 containerd[1603]: time="2025-08-13T00:42:47.965217638Z" level=info msg="TearDown network for sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\" successfully" Aug 13 00:42:47.967134 containerd[1603]: time="2025-08-13T00:42:47.965244038Z" level=info msg="StopPodSandbox for \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\" returns successfully" Aug 13 00:42:47.967604 containerd[1603]: time="2025-08-13T00:42:47.967444994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-844tr,Uid:090fd80a-e98c-47af-a53e-06165e3cc066,Namespace:calico-system,Attempt:1,}" Aug 13 00:42:47.968952 systemd[1]: run-netns-cni\x2de9bc7b06\x2d6255\x2d6d43\x2dece8\x2d8468c817b8f4.mount: Deactivated successfully. Aug 13 00:42:48.148809 systemd-networkd[1238]: calib945008449a: Link UP Aug 13 00:42:48.150249 systemd-networkd[1238]: calib945008449a: Gained carrier Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.039 [INFO][4691] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0 calico-kube-controllers-6bb9ddbc7d- calico-system 9c15bfda-2353-4522-94fd-e2dfc420915b 946 0 2025-08-13 00:42:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bb9ddbc7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-c-674096e178 calico-kube-controllers-6bb9ddbc7d-mqfxp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib945008449a [] [] }} ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" Namespace="calico-system" Pod="calico-kube-controllers-6bb9ddbc7d-mqfxp" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.039 [INFO][4691] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" Namespace="calico-system" Pod="calico-kube-controllers-6bb9ddbc7d-mqfxp" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.084 [INFO][4714] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" HandleID="k8s-pod-network.bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.084 [INFO][4714] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" HandleID="k8s-pod-network.bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-c-674096e178", "pod":"calico-kube-controllers-6bb9ddbc7d-mqfxp", "timestamp":"2025-08-13 00:42:48.084480328 +0000 UTC"}, Hostname:"ci-4081-3-5-c-674096e178", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.084 [INFO][4714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.084 [INFO][4714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.084 [INFO][4714] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-c-674096e178' Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.096 [INFO][4714] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.102 [INFO][4714] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.109 [INFO][4714] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.112 [INFO][4714] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.117 [INFO][4714] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.117 [INFO][4714] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.119 [INFO][4714] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.126 [INFO][4714] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.135 [INFO][4714] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.196/26] block=192.168.125.192/26 
handle="k8s-pod-network.bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.135 [INFO][4714] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.196/26] handle="k8s-pod-network.bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.135 [INFO][4714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:42:48.162652 containerd[1603]: 2025-08-13 00:42:48.135 [INFO][4714] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.196/26] IPv6=[] ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" HandleID="k8s-pod-network.bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:42:48.164142 containerd[1603]: 2025-08-13 00:42:48.140 [INFO][4691] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" Namespace="calico-system" Pod="calico-kube-controllers-6bb9ddbc7d-mqfxp" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0", GenerateName:"calico-kube-controllers-6bb9ddbc7d-", Namespace:"calico-system", SelfLink:"", UID:"9c15bfda-2353-4522-94fd-e2dfc420915b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb9ddbc7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"", Pod:"calico-kube-controllers-6bb9ddbc7d-mqfxp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib945008449a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:48.164142 containerd[1603]: 2025-08-13 00:42:48.140 [INFO][4691] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.196/32] ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" Namespace="calico-system" Pod="calico-kube-controllers-6bb9ddbc7d-mqfxp" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:42:48.164142 containerd[1603]: 2025-08-13 00:42:48.140 [INFO][4691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib945008449a ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" 
Namespace="calico-system" Pod="calico-kube-controllers-6bb9ddbc7d-mqfxp" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:42:48.164142 containerd[1603]: 2025-08-13 00:42:48.144 [INFO][4691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" Namespace="calico-system" Pod="calico-kube-controllers-6bb9ddbc7d-mqfxp" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:42:48.164142 containerd[1603]: 2025-08-13 00:42:48.145 [INFO][4691] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" Namespace="calico-system" Pod="calico-kube-controllers-6bb9ddbc7d-mqfxp" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0", GenerateName:"calico-kube-controllers-6bb9ddbc7d-", Namespace:"calico-system", SelfLink:"", UID:"9c15bfda-2353-4522-94fd-e2dfc420915b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb9ddbc7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff", Pod:"calico-kube-controllers-6bb9ddbc7d-mqfxp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib945008449a", MAC:"3e:2b:16:47:65:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:48.164142 containerd[1603]: 2025-08-13 00:42:48.159 [INFO][4691] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff" Namespace="calico-system" Pod="calico-kube-controllers-6bb9ddbc7d-mqfxp" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:42:48.190932 containerd[1603]: time="2025-08-13T00:42:48.190494482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:42:48.190932 containerd[1603]: time="2025-08-13T00:42:48.190599002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:42:48.190932 containerd[1603]: time="2025-08-13T00:42:48.190620842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:48.190932 containerd[1603]: time="2025-08-13T00:42:48.190724602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:48.268295 systemd-networkd[1238]: calif1a8968ffdf: Link UP Aug 13 00:42:48.269343 systemd-networkd[1238]: calif1a8968ffdf: Gained carrier Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.050 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0 goldmane-58fd7646b9- calico-system 090fd80a-e98c-47af-a53e-06165e3cc066 945 0 2025-08-13 00:42:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-c-674096e178 goldmane-58fd7646b9-844tr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif1a8968ffdf [] [] }} ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" Namespace="calico-system" Pod="goldmane-58fd7646b9-844tr" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.050 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" Namespace="calico-system" Pod="goldmane-58fd7646b9-844tr" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.090 [INFO][4720] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" HandleID="k8s-pod-network.3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.090 [INFO][4720] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" HandleID="k8s-pod-network.3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-c-674096e178", "pod":"goldmane-58fd7646b9-844tr", "timestamp":"2025-08-13 00:42:48.090683998 +0000 UTC"}, Hostname:"ci-4081-3-5-c-674096e178", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.090 [INFO][4720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.135 [INFO][4720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.136 [INFO][4720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-c-674096e178' Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.197 [INFO][4720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.203 [INFO][4720] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.214 [INFO][4720] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.218 [INFO][4720] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.227 [INFO][4720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.228 [INFO][4720] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.233 [INFO][4720] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24 Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.241 [INFO][4720] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.256 [INFO][4720] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.197/26] block=192.168.125.192/26 handle="k8s-pod-network.3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.256 [INFO][4720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.197/26] handle="k8s-pod-network.3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.256 [INFO][4720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:42:48.298553 containerd[1603]: 2025-08-13 00:42:48.257 [INFO][4720] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.197/26] IPv6=[] ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" HandleID="k8s-pod-network.3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:42:48.299132 containerd[1603]: 2025-08-13 00:42:48.264 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" Namespace="calico-system" Pod="goldmane-58fd7646b9-844tr" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"090fd80a-e98c-47af-a53e-06165e3cc066", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"", Pod:"goldmane-58fd7646b9-844tr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1a8968ffdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:48.299132 containerd[1603]: 2025-08-13 00:42:48.264 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.197/32] ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" Namespace="calico-system" Pod="goldmane-58fd7646b9-844tr" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:42:48.299132 containerd[1603]: 2025-08-13 00:42:48.264 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1a8968ffdf ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" Namespace="calico-system" Pod="goldmane-58fd7646b9-844tr" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:42:48.299132 containerd[1603]: 2025-08-13 00:42:48.269 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" Namespace="calico-system" Pod="goldmane-58fd7646b9-844tr" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:42:48.299132 containerd[1603]: 2025-08-13 00:42:48.269 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-844tr" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"090fd80a-e98c-47af-a53e-06165e3cc066", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24", Pod:"goldmane-58fd7646b9-844tr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1a8968ffdf", MAC:"4e:ee:cb:01:fb:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:48.299132 containerd[1603]: 2025-08-13 00:42:48.292 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24" Namespace="calico-system" Pod="goldmane-58fd7646b9-844tr" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:42:48.322282 containerd[1603]: time="2025-08-13T00:42:48.321237558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb9ddbc7d-mqfxp,Uid:9c15bfda-2353-4522-94fd-e2dfc420915b,Namespace:calico-system,Attempt:1,} returns sandbox id \"bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff\"" Aug 13 00:42:48.324269 containerd[1603]: time="2025-08-13T00:42:48.324048194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:42:48.337713 containerd[1603]: time="2025-08-13T00:42:48.337408693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:42:48.337713 containerd[1603]: time="2025-08-13T00:42:48.337466493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:42:48.337713 containerd[1603]: time="2025-08-13T00:42:48.337483213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:48.345782 containerd[1603]: time="2025-08-13T00:42:48.342388605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:48.407172 containerd[1603]: time="2025-08-13T00:42:48.407108024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-844tr,Uid:090fd80a-e98c-47af-a53e-06165e3cc066,Namespace:calico-system,Attempt:1,} returns sandbox id \"3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24\"" Aug 13 00:42:48.824605 containerd[1603]: time="2025-08-13T00:42:48.824533572Z" level=info msg="StopPodSandbox for \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\"" Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.907 [INFO][4844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.907 [INFO][4844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" iface="eth0" netns="/var/run/netns/cni-92e7892a-fe17-c94b-03b6-b70f0a137587" Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.908 [INFO][4844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" iface="eth0" netns="/var/run/netns/cni-92e7892a-fe17-c94b-03b6-b70f0a137587" Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.909 [INFO][4844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" iface="eth0" netns="/var/run/netns/cni-92e7892a-fe17-c94b-03b6-b70f0a137587" Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.909 [INFO][4844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.909 [INFO][4844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.930 [INFO][4852] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" HandleID="k8s-pod-network.201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.930 [INFO][4852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.930 [INFO][4852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.941 [WARNING][4852] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" HandleID="k8s-pod-network.201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.941 [INFO][4852] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" HandleID="k8s-pod-network.201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.943 [INFO][4852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:42:48.947144 containerd[1603]: 2025-08-13 00:42:48.945 [INFO][4844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:42:48.948006 containerd[1603]: time="2025-08-13T00:42:48.947637380Z" level=info msg="TearDown network for sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\" successfully" Aug 13 00:42:48.948006 containerd[1603]: time="2025-08-13T00:42:48.947668260Z" level=info msg="StopPodSandbox for \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\" returns successfully" Aug 13 00:42:48.950283 containerd[1603]: time="2025-08-13T00:42:48.949560417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb999d87-6t872,Uid:1beb254b-638d-4817-98ac-f5a8ad60ec6e,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:42:48.964244 systemd[1]: run-netns-cni\x2d92e7892a\x2dfe17\x2dc94b\x2d03b6\x2db70f0a137587.mount: Deactivated successfully. 
Aug 13 00:42:49.094414 systemd-networkd[1238]: calif48def4e5cc: Link UP Aug 13 00:42:49.095074 systemd-networkd[1238]: calif48def4e5cc: Gained carrier Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.000 [INFO][4859] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0 calico-apiserver-6fcb999d87- calico-apiserver 1beb254b-638d-4817-98ac-f5a8ad60ec6e 957 0 2025-08-13 00:42:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fcb999d87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-c-674096e178 calico-apiserver-6fcb999d87-6t872 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif48def4e5cc [] [] }} ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-6t872" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.000 [INFO][4859] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-6t872" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.035 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" HandleID="k8s-pod-network.ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.036 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" HandleID="k8s-pod-network.ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-c-674096e178", "pod":"calico-apiserver-6fcb999d87-6t872", "timestamp":"2025-08-13 00:42:49.035838605 +0000 UTC"}, Hostname:"ci-4081-3-5-c-674096e178", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.036 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.036 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.036 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-c-674096e178' Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.049 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.059 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-c-674096e178" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.066 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.068 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.072 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.072 [INFO][4871] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.074 [INFO][4871] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.079 [INFO][4871] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.088 [INFO][4871] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.198/26] block=192.168.125.192/26 handle="k8s-pod-network.ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.088 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.198/26] handle="k8s-pod-network.ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.088 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:42:49.117171 containerd[1603]: 2025-08-13 00:42:49.088 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.198/26] IPv6=[] ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" HandleID="k8s-pod-network.ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:42:49.117801 containerd[1603]: 2025-08-13 00:42:49.091 [INFO][4859] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-6t872" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0", GenerateName:"calico-apiserver-6fcb999d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"1beb254b-638d-4817-98ac-f5a8ad60ec6e", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb999d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"", Pod:"calico-apiserver-6fcb999d87-6t872", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif48def4e5cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:49.117801 containerd[1603]: 2025-08-13 00:42:49.091 [INFO][4859] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.198/32] ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-6t872" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:42:49.117801 containerd[1603]: 2025-08-13 00:42:49.091 [INFO][4859] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif48def4e5cc ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-6t872" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:42:49.117801 containerd[1603]: 2025-08-13 00:42:49.095 [INFO][4859] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-6t872" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:42:49.117801 containerd[1603]: 2025-08-13 
00:42:49.096 [INFO][4859] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-6t872" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0", GenerateName:"calico-apiserver-6fcb999d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"1beb254b-638d-4817-98ac-f5a8ad60ec6e", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb999d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c", Pod:"calico-apiserver-6fcb999d87-6t872", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif48def4e5cc", MAC:"da:4d:82:d8:e8:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:49.117801 containerd[1603]: 2025-08-13 00:42:49.111 [INFO][4859] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-6t872" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:42:49.145193 containerd[1603]: time="2025-08-13T00:42:49.144449086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:42:49.145193 containerd[1603]: time="2025-08-13T00:42:49.144530486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:42:49.145193 containerd[1603]: time="2025-08-13T00:42:49.144546526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:49.145193 containerd[1603]: time="2025-08-13T00:42:49.144633606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:49.208714 containerd[1603]: time="2025-08-13T00:42:49.208662232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb999d87-6t872,Uid:1beb254b-638d-4817-98ac-f5a8ad60ec6e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c\"" Aug 13 00:42:49.516821 systemd-networkd[1238]: calib945008449a: Gained IPv6LL Aug 13 00:42:49.827389 containerd[1603]: time="2025-08-13T00:42:49.826657567Z" level=info msg="StopPodSandbox for \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\"" Aug 13 00:42:49.828004 containerd[1603]: time="2025-08-13T00:42:49.826659887Z" level=info msg="StopPodSandbox for \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\"" Aug 13 00:42:49.900239 systemd-networkd[1238]: calif1a8968ffdf: Gained IPv6LL Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.905 [INFO][4948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.905 [INFO][4948] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" iface="eth0" netns="/var/run/netns/cni-9d579b29-6e7a-f9e9-3ed4-2594ebc74070" Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.906 [INFO][4948] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" iface="eth0" netns="/var/run/netns/cni-9d579b29-6e7a-f9e9-3ed4-2594ebc74070" Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.906 [INFO][4948] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" iface="eth0" netns="/var/run/netns/cni-9d579b29-6e7a-f9e9-3ed4-2594ebc74070" Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.906 [INFO][4948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.906 [INFO][4948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.929 [INFO][4962] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" HandleID="k8s-pod-network.4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.930 [INFO][4962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.930 [INFO][4962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.940 [WARNING][4962] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" HandleID="k8s-pod-network.4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.940 [INFO][4962] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" HandleID="k8s-pod-network.4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.942 [INFO][4962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:42:49.948908 containerd[1603]: 2025-08-13 00:42:49.944 [INFO][4948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:42:49.948908 containerd[1603]: time="2025-08-13T00:42:49.947448310Z" level=info msg="TearDown network for sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\" successfully" Aug 13 00:42:49.948908 containerd[1603]: time="2025-08-13T00:42:49.947480110Z" level=info msg="StopPodSandbox for \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\" returns successfully" Aug 13 00:42:49.951954 containerd[1603]: time="2025-08-13T00:42:49.951199665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb999d87-pw4vs,Uid:0df81e4e-8fb8-429d-9966-a87b9cc013c8,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:42:49.958258 systemd[1]: run-netns-cni\x2d9d579b29\x2d6e7a\x2df9e9\x2d3ed4\x2d2594ebc74070.mount: Deactivated successfully. Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.895 [INFO][4944] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.895 [INFO][4944] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" iface="eth0" netns="/var/run/netns/cni-e172fa3b-a49b-fd53-080f-fb5f8a21d661" Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.896 [INFO][4944] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" iface="eth0" netns="/var/run/netns/cni-e172fa3b-a49b-fd53-080f-fb5f8a21d661" Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.897 [INFO][4944] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" iface="eth0" netns="/var/run/netns/cni-e172fa3b-a49b-fd53-080f-fb5f8a21d661" Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.897 [INFO][4944] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.897 [INFO][4944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.932 [INFO][4957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" HandleID="k8s-pod-network.1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.932 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.942 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.964 [WARNING][4957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" HandleID="k8s-pod-network.1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.966 [INFO][4957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" HandleID="k8s-pod-network.1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.970 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:42:49.983240 containerd[1603]: 2025-08-13 00:42:49.978 [INFO][4944] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:42:49.985384 containerd[1603]: time="2025-08-13T00:42:49.983726977Z" level=info msg="TearDown network for sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\" successfully" Aug 13 00:42:49.985384 containerd[1603]: time="2025-08-13T00:42:49.983776377Z" level=info msg="StopPodSandbox for \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\" returns successfully" Aug 13 00:42:49.988941 containerd[1603]: time="2025-08-13T00:42:49.986141054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cldrb,Uid:4817502a-aff6-4c70-b804-8c5d92350237,Namespace:calico-system,Attempt:1,}" Aug 13 00:42:49.988665 systemd[1]: run-netns-cni\x2de172fa3b\x2da49b\x2dfd53\x2d080f\x2dfb5f8a21d661.mount: Deactivated successfully. 
Aug 13 00:42:50.164059 systemd-networkd[1238]: caliae571d036be: Link UP Aug 13 00:42:50.164230 systemd-networkd[1238]: caliae571d036be: Gained carrier Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.053 [INFO][4972] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0 calico-apiserver-6fcb999d87- calico-apiserver 0df81e4e-8fb8-429d-9966-a87b9cc013c8 967 0 2025-08-13 00:42:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fcb999d87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-c-674096e178 calico-apiserver-6fcb999d87-pw4vs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae571d036be [] [] }} ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-pw4vs" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.053 [INFO][4972] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-pw4vs" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.094 [INFO][4995] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" HandleID="k8s-pod-network.d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.095 [INFO][4995] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" HandleID="k8s-pod-network.d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb7f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-c-674096e178", "pod":"calico-apiserver-6fcb999d87-pw4vs", "timestamp":"2025-08-13 00:42:50.094682143 +0000 UTC"}, Hostname:"ci-4081-3-5-c-674096e178", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.095 [INFO][4995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.095 [INFO][4995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
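Note the HandleID format in the ADD request above: it is simply the string "k8s-pod-network." prefixed to the sandbox container ID, and the same handle reappears later in the DEL path ("Releasing address using handleID"), which is what lets a release locate the allocation without extra state. A trivial sketch of that derivation (the helper name is illustrative):

```go
package main

import "fmt"

// handleID derives the IPAM handle from a sandbox container ID, matching the
// "k8s-pod-network.<containerID>" strings visible in the ipam_plugin.go logs.
func handleID(containerID string) string {
	return "k8s-pod-network." + containerID
}

func main() {
	fmt.Println(handleID("d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379"))
}
```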
Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.095 [INFO][4995] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-c-674096e178' Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.110 [INFO][4995] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.118 [INFO][4995] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.124 [INFO][4995] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.129 [INFO][4995] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.133 [INFO][4995] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.133 [INFO][4995] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.137 [INFO][4995] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379 Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.143 [INFO][4995] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.151 [INFO][4995] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.199/26] block=192.168.125.192/26 handle="k8s-pod-network.d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.151 [INFO][4995] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.199/26] handle="k8s-pod-network.d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.151 [INFO][4995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
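The IPAM walk above shows Calico's block-affinity scheme end to end: look up the blocks affine to this host, try the affine block 192.168.125.192/26, load it, claim the lowest free ordinal for a new handle, and write the block back to the datastore to make the claim durable, yielding 192.168.125.199/26. A simplified model of one such block — a toy map instead of the real compare-and-swap datastore write, with illustrative names (`block`, `assign`):

```go
package main

import (
	"errors"
	"fmt"
	"net/netip"
)

// block models a node-affine /26 allocation block like 192.168.125.192/26:
// 64 ordinals, each either free or bound to the allocation handle that took it.
type block struct {
	cidr    netip.Prefix   // e.g. 192.168.125.192/26
	handles map[int]string // ordinal -> handle that claimed it
}

// assign claims the lowest free ordinal for handle and returns its address.
// The real "Writing block in order to claim IPs" step is a compare-and-swap
// against the datastore; here it is just a map update.
func (b *block) assign(handle string) (netip.Addr, error) {
	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
	for ord := 0; ord < size; ord++ {
		if _, taken := b.handles[ord]; taken {
			continue
		}
		b.handles[ord] = handle
		addr := b.cidr.Addr()
		for i := 0; i < ord; i++ {
			addr = addr.Next()
		}
		return addr, nil
	}
	return netip.Addr{}, errors.New("block is full")
}

func main() {
	b := &block{
		cidr:    netip.MustParsePrefix("192.168.125.192/26"),
		handles: map[int]string{},
	}
	// Assume ordinals 0-6 were claimed by earlier pods on this node, so the
	// next two assignments land on .199 and .200, matching the log.
	for i := 0; i < 7; i++ {
		b.handles[i] = fmt.Sprintf("earlier-%d", i)
	}
	fmt.Println(b.assign("k8s-pod-network.d4ecdc60...")) // 192.168.125.199 <nil>
	fmt.Println(b.assign("k8s-pod-network.44dd7015...")) // 192.168.125.200 <nil>
}
```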
Aug 13 00:42:50.187772 containerd[1603]: 2025-08-13 00:42:50.151 [INFO][4995] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.199/26] IPv6=[] ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" HandleID="k8s-pod-network.d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:42:50.189268 containerd[1603]: 2025-08-13 00:42:50.157 [INFO][4972] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-pw4vs" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0", GenerateName:"calico-apiserver-6fcb999d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df81e4e-8fb8-429d-9966-a87b9cc013c8", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb999d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"", Pod:"calico-apiserver-6fcb999d87-pw4vs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae571d036be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:50.189268 containerd[1603]: 2025-08-13 00:42:50.157 [INFO][4972] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.199/32] ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-pw4vs" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:42:50.189268 containerd[1603]: 2025-08-13 00:42:50.157 [INFO][4972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae571d036be ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-pw4vs" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:42:50.189268 containerd[1603]: 2025-08-13 00:42:50.161 [INFO][4972] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-pw4vs" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:42:50.189268 containerd[1603]: 2025-08-13 
00:42:50.162 [INFO][4972] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-pw4vs" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0", GenerateName:"calico-apiserver-6fcb999d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df81e4e-8fb8-429d-9966-a87b9cc013c8", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb999d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379", Pod:"calico-apiserver-6fcb999d87-pw4vs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae571d036be", MAC:"4a:5f:21:4c:48:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:50.189268 containerd[1603]: 2025-08-13 00:42:50.181 [INFO][4972] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb999d87-pw4vs" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:42:50.230197 containerd[1603]: time="2025-08-13T00:42:50.229050279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:42:50.230197 containerd[1603]: time="2025-08-13T00:42:50.229788118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:42:50.230197 containerd[1603]: time="2025-08-13T00:42:50.229851318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:50.230197 containerd[1603]: time="2025-08-13T00:42:50.230039998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:50.296174 systemd-networkd[1238]: caliaf54f6e1584: Link UP Aug 13 00:42:50.301011 systemd-networkd[1238]: caliaf54f6e1584: Gained carrier Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.058 [INFO][4981] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0 csi-node-driver- calico-system 4817502a-aff6-4c70-b804-8c5d92350237 966 0 2025-08-13 00:42:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-c-674096e178 csi-node-driver-cldrb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaf54f6e1584 [] [] }} ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Namespace="calico-system" Pod="csi-node-driver-cldrb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.058 [INFO][4981] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Namespace="calico-system" Pod="csi-node-driver-cldrb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.103 [INFO][5000] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" HandleID="k8s-pod-network.44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.103 [INFO][5000] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" HandleID="k8s-pod-network.44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b770), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-c-674096e178", "pod":"csi-node-driver-cldrb", "timestamp":"2025-08-13 00:42:50.103634371 +0000 UTC"}, Hostname:"ci-4081-3-5-c-674096e178", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.103 [INFO][5000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.151 [INFO][5000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
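The timestamps make the host-wide lock's serialization visible: request [5000] asked for the lock at 00:42:50.103 but only acquired it at 00:42:50.151, the instant request [4995] released it, so the two concurrent CNI ADDs were fully serialized with roughly a 48 ms wait. A quick way to quantify that from the log timestamps (the millisecond precision of the printed times bounds the accuracy):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.000"
	// Timestamps copied from the two ipam_plugin.go lines for request [5000]:
	// "About to acquire host-wide IPAM lock" vs "Acquired host-wide IPAM lock".
	requested, _ := time.Parse(layout, "2025-08-13 00:42:50.103")
	acquired, _ := time.Parse(layout, "2025-08-13 00:42:50.151")
	fmt.Printf("lock wait: %v\n", acquired.Sub(requested)) // 48ms
}
```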
Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.152 [INFO][5000] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-c-674096e178' Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.212 [INFO][5000] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.225 [INFO][5000] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.233 [INFO][5000] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.237 [INFO][5000] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.244 [INFO][5000] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.244 [INFO][5000] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.248 [INFO][5000] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.254 [INFO][5000] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.273 [INFO][5000] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.200/26] block=192.168.125.192/26 handle="k8s-pod-network.44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.273 [INFO][5000] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.200/26] handle="k8s-pod-network.44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" host="ci-4081-3-5-c-674096e178" Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.273 [INFO][5000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
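Two representations of the same claim appear in these entries: IPAM reports the address with the block's mask (192.168.125.200/26), while the WorkloadEndpoint that follows records a host route (IPNetworks ["192.168.125.200/32"]). A small check, using Go's net/netip, that the claimed address sits inside the affine block and showing the /32 conversion:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// IPAM reports the claim with the block's mask; the endpoint stores a /32.
	claimed := netip.MustParsePrefix("192.168.125.200/26")
	block := netip.MustParsePrefix("192.168.125.192/26")

	fmt.Println(block.Contains(claimed.Addr())) // true: inside the affine block
	host := netip.PrefixFrom(claimed.Addr(), 32)
	fmt.Println(host) // 192.168.125.200/32, as recorded in IPNetworks
}
```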
Aug 13 00:42:50.334189 containerd[1603]: 2025-08-13 00:42:50.273 [INFO][5000] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.200/26] IPv6=[] ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" HandleID="k8s-pod-network.44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:42:50.335813 containerd[1603]: 2025-08-13 00:42:50.281 [INFO][4981] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Namespace="calico-system" Pod="csi-node-driver-cldrb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4817502a-aff6-4c70-b804-8c5d92350237", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"", Pod:"csi-node-driver-cldrb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf54f6e1584", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:50.335813 containerd[1603]: 2025-08-13 00:42:50.282 [INFO][4981] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.200/32] ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Namespace="calico-system" Pod="csi-node-driver-cldrb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:42:50.335813 containerd[1603]: 2025-08-13 00:42:50.282 [INFO][4981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf54f6e1584 ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Namespace="calico-system" Pod="csi-node-driver-cldrb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:42:50.335813 containerd[1603]: 2025-08-13 00:42:50.300 [INFO][4981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Namespace="calico-system" Pod="csi-node-driver-cldrb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:42:50.335813 containerd[1603]: 2025-08-13 00:42:50.308 [INFO][4981] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Namespace="calico-system" Pod="csi-node-driver-cldrb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4817502a-aff6-4c70-b804-8c5d92350237", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f", Pod:"csi-node-driver-cldrb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf54f6e1584", MAC:"6a:c6:8b:16:62:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:42:50.335813 containerd[1603]: 2025-08-13 00:42:50.320 [INFO][4981] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f" Namespace="calico-system" Pod="csi-node-driver-cldrb" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:42:50.404284 containerd[1603]: time="2025-08-13T00:42:50.404015919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:42:50.404284 containerd[1603]: time="2025-08-13T00:42:50.404101759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:42:50.404284 containerd[1603]: time="2025-08-13T00:42:50.404133319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:50.404630 containerd[1603]: time="2025-08-13T00:42:50.404449758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:42:50.406870 containerd[1603]: time="2025-08-13T00:42:50.406826155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb999d87-pw4vs,Uid:0df81e4e-8fb8-429d-9966-a87b9cc013c8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379\"" Aug 13 00:42:50.461600 containerd[1603]: time="2025-08-13T00:42:50.461553760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cldrb,Uid:4817502a-aff6-4c70-b804-8c5d92350237,Namespace:calico-system,Attempt:1,} returns sandbox id \"44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f\"" Aug 13 00:42:50.924568 systemd-networkd[1238]: calif48def4e5cc: Gained IPv6LL Aug 13 00:42:51.308394 systemd-networkd[1238]: caliae571d036be: Gained IPv6LL Aug 13 00:42:51.767375 containerd[1603]: time="2025-08-13T00:42:51.767284974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:51.769053 containerd[1603]: time="2025-08-13T00:42:51.769003081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:42:51.769768 containerd[1603]: time="2025-08-13T00:42:51.769381327Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:51.772455 containerd[1603]: time="2025-08-13T00:42:51.772384093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:51.773362 containerd[1603]: time="2025-08-13T00:42:51.773049063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.448962189s" Aug 13 00:42:51.773362 containerd[1603]: time="2025-08-13T00:42:51.773088464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:42:51.775681 containerd[1603]: time="2025-08-13T00:42:51.775110095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:42:51.788210 containerd[1603]: time="2025-08-13T00:42:51.788176736Z" level=info msg="CreateContainer within sandbox \"bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:42:51.806091 containerd[1603]: time="2025-08-13T00:42:51.806051931Z" level=info msg="CreateContainer within sandbox \"bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9ed8f66a8b025aeccf90cc8c2bd23493706113fb98cbdcc3ede7895212aafaa5\"" Aug 13 00:42:51.808792 containerd[1603]: time="2025-08-13T00:42:51.808744493Z" level=info msg="StartContainer for \"9ed8f66a8b025aeccf90cc8c2bd23493706113fb98cbdcc3ede7895212aafaa5\"" Aug 
13 00:42:51.881751 containerd[1603]: time="2025-08-13T00:42:51.881694256Z" level=info msg="StartContainer for \"9ed8f66a8b025aeccf90cc8c2bd23493706113fb98cbdcc3ede7895212aafaa5\" returns successfully" Aug 13 00:42:52.175124 kubelet[2732]: I0813 00:42:52.174963 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bb9ddbc7d-mqfxp" podStartSLOduration=23.724143046000002 podStartE2EDuration="27.174874296s" podCreationTimestamp="2025-08-13 00:42:25 +0000 UTC" firstStartedPulling="2025-08-13 00:42:48.323657514 +0000 UTC m=+46.646256625" lastFinishedPulling="2025-08-13 00:42:51.774388724 +0000 UTC m=+50.096987875" observedRunningTime="2025-08-13 00:42:52.174001203 +0000 UTC m=+50.496600354" watchObservedRunningTime="2025-08-13 00:42:52.174874296 +0000 UTC m=+50.497473407" Aug 13 00:42:52.333774 systemd-networkd[1238]: caliaf54f6e1584: Gained IPv6LL Aug 13 00:42:54.275760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696387036.mount: Deactivated successfully. Aug 13 00:42:54.913555 containerd[1603]: time="2025-08-13T00:42:54.913222249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:54.917562 containerd[1603]: time="2025-08-13T00:42:54.916225772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 13 00:42:54.922044 containerd[1603]: time="2025-08-13T00:42:54.921279763Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:54.927724 containerd[1603]: time="2025-08-13T00:42:54.926768961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:54.929910 containerd[1603]: time="2025-08-13T00:42:54.929851444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.154703349s" Aug 13 00:42:54.930328 containerd[1603]: time="2025-08-13T00:42:54.930303011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:42:54.936924 containerd[1603]: time="2025-08-13T00:42:54.936597740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:42:54.938487 containerd[1603]: time="2025-08-13T00:42:54.938353965Z" level=info msg="CreateContainer within sandbox \"3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:42:54.965776 containerd[1603]: time="2025-08-13T00:42:54.965734712Z" level=info msg="CreateContainer within sandbox \"3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9ed4720ee207be592fc19ace1d39ffbaeacfa165666d3e24cd2632811f826cb5\"" Aug 13 00:42:54.968184 containerd[1603]: time="2025-08-13T00:42:54.968015505Z" level=info 
msg="StartContainer for \"9ed4720ee207be592fc19ace1d39ffbaeacfa165666d3e24cd2632811f826cb5\"" Aug 13 00:42:55.052213 containerd[1603]: time="2025-08-13T00:42:55.052101515Z" level=info msg="StartContainer for \"9ed4720ee207be592fc19ace1d39ffbaeacfa165666d3e24cd2632811f826cb5\" returns successfully" Aug 13 00:42:58.225141 containerd[1603]: time="2025-08-13T00:42:58.225038307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:58.226798 containerd[1603]: time="2025-08-13T00:42:58.226603167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:42:58.229168 containerd[1603]: time="2025-08-13T00:42:58.229119558Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:58.237256 containerd[1603]: time="2025-08-13T00:42:58.237085099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:58.239430 containerd[1603]: time="2025-08-13T00:42:58.239241927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.3021737s" Aug 13 00:42:58.239430 containerd[1603]: time="2025-08-13T00:42:58.239303927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:42:58.241471 containerd[1603]: time="2025-08-13T00:42:58.241099550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:42:58.244057 containerd[1603]: time="2025-08-13T00:42:58.243993907Z" level=info msg="CreateContainer within sandbox \"ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:42:58.264106 containerd[1603]: time="2025-08-13T00:42:58.263768837Z" level=info msg="CreateContainer within sandbox \"ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eff8ebbc6c6e0fca6702ad865ec0eb07b6e42fffcacf8f7c80a9a73c71c46d36\"" Aug 13 00:42:58.265943 containerd[1603]: time="2025-08-13T00:42:58.265125495Z" level=info msg="StartContainer for \"eff8ebbc6c6e0fca6702ad865ec0eb07b6e42fffcacf8f7c80a9a73c71c46d36\"" Aug 13 00:42:58.317203 systemd[1]: run-containerd-runc-k8s.io-eff8ebbc6c6e0fca6702ad865ec0eb07b6e42fffcacf8f7c80a9a73c71c46d36-runc.kLgYy9.mount: Deactivated successfully. 
Aug 13 00:42:58.367874 containerd[1603]: time="2025-08-13T00:42:58.367824156Z" level=info msg="StartContainer for \"eff8ebbc6c6e0fca6702ad865ec0eb07b6e42fffcacf8f7c80a9a73c71c46d36\" returns successfully" Aug 13 00:42:59.198665 kubelet[2732]: I0813 00:42:59.197211 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fcb999d87-6t872" podStartSLOduration=30.167011205 podStartE2EDuration="39.197187357s" podCreationTimestamp="2025-08-13 00:42:20 +0000 UTC" firstStartedPulling="2025-08-13 00:42:49.21026399 +0000 UTC m=+47.532863101" lastFinishedPulling="2025-08-13 00:42:58.240440062 +0000 UTC m=+56.563039253" observedRunningTime="2025-08-13 00:42:59.196119623 +0000 UTC m=+57.518718814" watchObservedRunningTime="2025-08-13 00:42:59.197187357 +0000 UTC m=+57.519786508" Aug 13 00:42:59.200997 kubelet[2732]: I0813 00:42:59.199059 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-844tr" podStartSLOduration=27.674618545 podStartE2EDuration="34.199033779s" podCreationTimestamp="2025-08-13 00:42:25 +0000 UTC" firstStartedPulling="2025-08-13 00:42:48.409008541 +0000 UTC m=+46.731607652" lastFinishedPulling="2025-08-13 00:42:54.933423655 +0000 UTC m=+53.256022886" observedRunningTime="2025-08-13 00:42:55.182993437 +0000 UTC m=+53.505592508" watchObservedRunningTime="2025-08-13 00:42:59.199033779 +0000 UTC m=+57.521632890" Aug 13 00:42:59.229080 containerd[1603]: time="2025-08-13T00:42:59.227964456Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:59.231685 containerd[1603]: time="2025-08-13T00:42:59.231626261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:42:59.236818 containerd[1603]: time="2025-08-13T00:42:59.236775725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 995.610973ms" Aug 13 00:42:59.236995 containerd[1603]: time="2025-08-13T00:42:59.236977447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:42:59.238321 containerd[1603]: time="2025-08-13T00:42:59.238188662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:42:59.244141 containerd[1603]: time="2025-08-13T00:42:59.244047294Z" level=info msg="CreateContainer within sandbox \"d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:42:59.278776 containerd[1603]: time="2025-08-13T00:42:59.278698481Z" level=info msg="CreateContainer within sandbox \"d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5e462176062e46131dc69b3384b896cce679f71844e79d162d94c26e33b21025\"" Aug 13 00:42:59.282063 containerd[1603]: time="2025-08-13T00:42:59.281023350Z" level=info msg="StartContainer for \"5e462176062e46131dc69b3384b896cce679f71844e79d162d94c26e33b21025\"" Aug 13 00:42:59.359895 containerd[1603]: 
time="2025-08-13T00:42:59.359063752Z" level=info msg="StartContainer for \"5e462176062e46131dc69b3384b896cce679f71844e79d162d94c26e33b21025\" returns successfully" Aug 13 00:43:00.186085 kubelet[2732]: I0813 00:43:00.186052 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:43:00.201401 kubelet[2732]: I0813 00:43:00.201330 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fcb999d87-pw4vs" podStartSLOduration=31.372828802 podStartE2EDuration="40.201312028s" podCreationTimestamp="2025-08-13 00:42:20 +0000 UTC" firstStartedPulling="2025-08-13 00:42:50.409280991 +0000 UTC m=+48.731880102" lastFinishedPulling="2025-08-13 00:42:59.237764257 +0000 UTC m=+57.560363328" observedRunningTime="2025-08-13 00:43:00.200393617 +0000 UTC m=+58.522992728" watchObservedRunningTime="2025-08-13 00:43:00.201312028 +0000 UTC m=+58.523911139" Aug 13 00:43:00.854907 containerd[1603]: time="2025-08-13T00:43:00.854540344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:00.856165 containerd[1603]: time="2025-08-13T00:43:00.856060322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 13 00:43:00.858758 containerd[1603]: time="2025-08-13T00:43:00.857629061Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:00.861343 containerd[1603]: time="2025-08-13T00:43:00.861304905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:00.867744 containerd[1603]: time="2025-08-13T00:43:00.867709262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.62948196s" Aug 13 00:43:00.867915 containerd[1603]: time="2025-08-13T00:43:00.867896064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 00:43:00.879577 containerd[1603]: time="2025-08-13T00:43:00.879532404Z" level=info msg="CreateContainer within sandbox \"44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:43:00.896918 containerd[1603]: time="2025-08-13T00:43:00.896790971Z" level=info msg="CreateContainer within sandbox \"44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6cef36f35a058c3236dd9a2d6f31125b68615fdee24614738243262f69eb2aaf\"" Aug 13 00:43:00.898428 containerd[1603]: time="2025-08-13T00:43:00.898406790Z" level=info msg="StartContainer for \"6cef36f35a058c3236dd9a2d6f31125b68615fdee24614738243262f69eb2aaf\"" Aug 13 00:43:01.024668 containerd[1603]: time="2025-08-13T00:43:01.023951609Z" level=info msg="StartContainer for \"6cef36f35a058c3236dd9a2d6f31125b68615fdee24614738243262f69eb2aaf\" returns successfully" Aug 13 
00:43:01.029107 containerd[1603]: time="2025-08-13T00:43:01.028500622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:43:01.846907 containerd[1603]: time="2025-08-13T00:43:01.846674214Z" level=info msg="StopPodSandbox for \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\"" Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.894 [WARNING][5439] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4817502a-aff6-4c70-b804-8c5d92350237", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f", Pod:"csi-node-driver-cldrb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf54f6e1584", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.894 [INFO][5439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.894 [INFO][5439] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" iface="eth0" netns="" Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.894 [INFO][5439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.894 [INFO][5439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.946 [INFO][5446] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" HandleID="k8s-pod-network.1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.946 [INFO][5446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.947 [INFO][5446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.962 [WARNING][5446] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" HandleID="k8s-pod-network.1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.962 [INFO][5446] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" HandleID="k8s-pod-network.1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.964 [INFO][5446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:01.969213 containerd[1603]: 2025-08-13 00:43:01.966 [INFO][5439] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:43:01.970394 containerd[1603]: time="2025-08-13T00:43:01.969968173Z" level=info msg="TearDown network for sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\" successfully" Aug 13 00:43:01.970394 containerd[1603]: time="2025-08-13T00:43:01.969998213Z" level=info msg="StopPodSandbox for \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\" returns successfully" Aug 13 00:43:01.972039 containerd[1603]: time="2025-08-13T00:43:01.971993837Z" level=info msg="RemovePodSandbox for \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\"" Aug 13 00:43:01.972151 containerd[1603]: time="2025-08-13T00:43:01.972047397Z" level=info msg="Forcibly stopping sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\"" Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.028 [WARNING][5461] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4817502a-aff6-4c70-b804-8c5d92350237", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f", Pod:"csi-node-driver-cldrb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf54f6e1584", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.029 [INFO][5461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.029 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" iface="eth0" netns="" Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.029 [INFO][5461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.029 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.057 [INFO][5468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" HandleID="k8s-pod-network.1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.058 [INFO][5468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.058 [INFO][5468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.069 [WARNING][5468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" HandleID="k8s-pod-network.1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.069 [INFO][5468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" HandleID="k8s-pod-network.1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Workload="ci--4081--3--5--c--674096e178-k8s-csi--node--driver--cldrb-eth0" Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.072 [INFO][5468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:02.077942 containerd[1603]: 2025-08-13 00:43:02.076 [INFO][5461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044" Aug 13 00:43:02.078436 containerd[1603]: time="2025-08-13T00:43:02.078007170Z" level=info msg="TearDown network for sandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\" successfully" Aug 13 00:43:02.084081 containerd[1603]: time="2025-08-13T00:43:02.084008999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:43:02.084836 containerd[1603]: time="2025-08-13T00:43:02.084132600Z" level=info msg="RemovePodSandbox \"1877f4ef0982b80625ae89d3d1c58b7f406ff9b89fcdfa1815051b2dd5125044\" returns successfully" Aug 13 00:43:02.084970 containerd[1603]: time="2025-08-13T00:43:02.084844928Z" level=info msg="StopPodSandbox for \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\"" Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.169 [WARNING][5482] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"40e29be8-46ac-4faf-8185-7148a795d441", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130", Pod:"coredns-7c65d6cfc9-dm6pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c604e13cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.170 [INFO][5482] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.170 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" iface="eth0" netns="" Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.170 [INFO][5482] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.170 [INFO][5482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.236 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" HandleID="k8s-pod-network.3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.242 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.242 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.258 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" HandleID="k8s-pod-network.3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.258 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" HandleID="k8s-pod-network.3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.260 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:02.264483 containerd[1603]: 2025-08-13 00:43:02.261 [INFO][5482] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:43:02.265243 containerd[1603]: time="2025-08-13T00:43:02.264521970Z" level=info msg="TearDown network for sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\" successfully" Aug 13 00:43:02.265243 containerd[1603]: time="2025-08-13T00:43:02.264552250Z" level=info msg="StopPodSandbox for \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\" returns successfully" Aug 13 00:43:02.265978 containerd[1603]: time="2025-08-13T00:43:02.265746664Z" level=info msg="RemovePodSandbox for \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\"" Aug 13 00:43:02.265978 containerd[1603]: time="2025-08-13T00:43:02.265925306Z" level=info msg="Forcibly stopping sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\"" Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.327 [WARNING][5507] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"40e29be8-46ac-4faf-8185-7148a795d441", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"6628fa7bef9519d552ab0cfdd7bb33c1d8e55148939fc35b63cc894e2763d130", Pod:"coredns-7c65d6cfc9-dm6pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c604e13cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.327 [INFO][5507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.327 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" iface="eth0" netns="" Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.327 [INFO][5507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.327 [INFO][5507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.357 [INFO][5514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" HandleID="k8s-pod-network.3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.357 [INFO][5514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.357 [INFO][5514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.372 [WARNING][5514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" HandleID="k8s-pod-network.3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.372 [INFO][5514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" HandleID="k8s-pod-network.3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--dm6pb-eth0" Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.375 [INFO][5514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:02.381022 containerd[1603]: 2025-08-13 00:43:02.378 [INFO][5507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179" Aug 13 00:43:02.381022 containerd[1603]: time="2025-08-13T00:43:02.380940173Z" level=info msg="TearDown network for sandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\" successfully" Aug 13 00:43:02.387395 containerd[1603]: time="2025-08-13T00:43:02.386226073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:43:02.387395 containerd[1603]: time="2025-08-13T00:43:02.386367634Z" level=info msg="RemovePodSandbox \"3d8ef7c8beb33459e3724c2644c60532cf2e1081efe08b65aab14a673b6cb179\" returns successfully" Aug 13 00:43:02.387395 containerd[1603]: time="2025-08-13T00:43:02.387147403Z" level=info msg="StopPodSandbox for \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\"" Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.437 [WARNING][5532] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.437 [INFO][5532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.437 [INFO][5532] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" iface="eth0" netns="" Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.437 [INFO][5532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.437 [INFO][5532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.470 [INFO][5539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" HandleID="k8s-pod-network.d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.470 [INFO][5539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.470 [INFO][5539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.480 [WARNING][5539] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" HandleID="k8s-pod-network.d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.480 [INFO][5539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" HandleID="k8s-pod-network.d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.483 [INFO][5539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:02.491141 containerd[1603]: 2025-08-13 00:43:02.489 [INFO][5532] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:43:02.491799 containerd[1603]: time="2025-08-13T00:43:02.491767472Z" level=info msg="TearDown network for sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\" successfully" Aug 13 00:43:02.491909 containerd[1603]: time="2025-08-13T00:43:02.491873033Z" level=info msg="StopPodSandbox for \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\" returns successfully" Aug 13 00:43:02.493749 containerd[1603]: time="2025-08-13T00:43:02.493677454Z" level=info msg="RemovePodSandbox for \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\"" Aug 13 00:43:02.494277 containerd[1603]: time="2025-08-13T00:43:02.494254540Z" level=info msg="Forcibly stopping sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\"" Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.549 [WARNING][5553] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" WorkloadEndpoint="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.549 [INFO][5553] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.550 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" iface="eth0" netns="" Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.550 [INFO][5553] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.550 [INFO][5553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.580 [INFO][5560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" HandleID="k8s-pod-network.d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.580 [INFO][5560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.580 [INFO][5560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.590 [WARNING][5560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" HandleID="k8s-pod-network.d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.591 [INFO][5560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" HandleID="k8s-pod-network.d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Workload="ci--4081--3--5--c--674096e178-k8s-whisker--697d76b--vp9n6-eth0" Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.593 [INFO][5560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:02.597245 containerd[1603]: 2025-08-13 00:43:02.594 [INFO][5553] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95" Aug 13 00:43:02.598664 containerd[1603]: time="2025-08-13T00:43:02.598328363Z" level=info msg="TearDown network for sandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\" successfully" Aug 13 00:43:02.606712 containerd[1603]: time="2025-08-13T00:43:02.606479655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:43:02.606712 containerd[1603]: time="2025-08-13T00:43:02.606587537Z" level=info msg="RemovePodSandbox \"d784b52ab88725289206d3a5d49a44c07240e5f87c04f812e08475104503ec95\" returns successfully" Aug 13 00:43:02.607459 containerd[1603]: time="2025-08-13T00:43:02.607135343Z" level=info msg="StopPodSandbox for \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\"" Aug 13 00:43:02.615814 containerd[1603]: time="2025-08-13T00:43:02.615758481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:02.621197 containerd[1603]: time="2025-08-13T00:43:02.621138142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 13 00:43:02.624407 containerd[1603]: time="2025-08-13T00:43:02.624287418Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:02.660692 containerd[1603]: time="2025-08-13T00:43:02.660523869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:02.664819 containerd[1603]: time="2025-08-13T00:43:02.664668797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.636122254s" Aug 13 00:43:02.664819 containerd[1603]: time="2025-08-13T00:43:02.664819318Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 13 00:43:02.670328 containerd[1603]: time="2025-08-13T00:43:02.670258660Z" level=info msg="CreateContainer within sandbox \"44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:43:02.690825 containerd[1603]: time="2025-08-13T00:43:02.690752693Z" level=info msg="CreateContainer within sandbox \"44dd70159dafa42d3ee0782a1a5f9e1b24261bec6d135b3d6fd711443918c28f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ebc36d762e69f82823db3b7720a7a9b68fbbf132eb3fe47cfa650463fba31a9d\"" Aug 13 00:43:02.693648 containerd[1603]: time="2025-08-13T00:43:02.693368003Z" level=info msg="StartContainer for \"ebc36d762e69f82823db3b7720a7a9b68fbbf132eb3fe47cfa650463fba31a9d\"" Aug 13 00:43:02.738785 systemd[1]: run-containerd-runc-k8s.io-ebc36d762e69f82823db3b7720a7a9b68fbbf132eb3fe47cfa650463fba31a9d-runc.oBc14v.mount: Deactivated successfully. Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.663 [WARNING][5574] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"09b573c1-fa3c-4342-84c0-9c27bccb5bed", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c", Pod:"coredns-7c65d6cfc9-hcds2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif72b3ef10aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.663 [INFO][5574] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.663 [INFO][5574] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" iface="eth0" netns="" Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.663 [INFO][5574] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.663 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.700 [INFO][5581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" HandleID="k8s-pod-network.06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.700 [INFO][5581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.700 [INFO][5581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.722 [WARNING][5581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" HandleID="k8s-pod-network.06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.722 [INFO][5581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" HandleID="k8s-pod-network.06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.731 [INFO][5581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:02.746174 containerd[1603]: 2025-08-13 00:43:02.744 [INFO][5574] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:43:02.746174 containerd[1603]: time="2025-08-13T00:43:02.746012801Z" level=info msg="TearDown network for sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\" successfully" Aug 13 00:43:02.746174 containerd[1603]: time="2025-08-13T00:43:02.746054801Z" level=info msg="StopPodSandbox for \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\" returns successfully" Aug 13 00:43:02.747318 containerd[1603]: time="2025-08-13T00:43:02.747291375Z" level=info msg="RemovePodSandbox for \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\"" Aug 13 00:43:02.747490 containerd[1603]: time="2025-08-13T00:43:02.747441697Z" level=info msg="Forcibly stopping sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\"" Aug 13 00:43:02.782077 containerd[1603]: time="2025-08-13T00:43:02.782028810Z" level=info msg="StartContainer for \"ebc36d762e69f82823db3b7720a7a9b68fbbf132eb3fe47cfa650463fba31a9d\" returns successfully" Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.815 [WARNING][5618] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"09b573c1-fa3c-4342-84c0-9c27bccb5bed", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"be50848914b209b00e34f15638f6110cc64090d2b1b2ca986dbd1bfd5623061c", Pod:"coredns-7c65d6cfc9-hcds2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif72b3ef10aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.816 [INFO][5618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.816 [INFO][5618] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" iface="eth0" netns="" Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.816 [INFO][5618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.816 [INFO][5618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.840 [INFO][5637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" HandleID="k8s-pod-network.06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.840 [INFO][5637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.840 [INFO][5637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.855 [WARNING][5637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" HandleID="k8s-pod-network.06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.855 [INFO][5637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" HandleID="k8s-pod-network.06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Workload="ci--4081--3--5--c--674096e178-k8s-coredns--7c65d6cfc9--hcds2-eth0" Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.858 [INFO][5637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:02.864409 containerd[1603]: 2025-08-13 00:43:02.859 [INFO][5618] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619" Aug 13 00:43:02.866231 containerd[1603]: time="2025-08-13T00:43:02.864389026Z" level=info msg="TearDown network for sandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\" successfully" Aug 13 00:43:02.883328 containerd[1603]: time="2025-08-13T00:43:02.883231360Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
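The warning just above ("an error occurred when try to find sandbox: not found" — containerd's own wording) does not fail the operation: the next record reports RemovePodSandbox returning successfully, i.e. an already-absent sandbox satisfies a forced removal, so repeated cleanup converges. A hedged Go sketch of that convention follows; it models the behavior visible in the log and is not containerd code.

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// removeSandbox models "Forcibly stopping sandbox": an already-gone sandbox
// counts as success for the caller, since the desired end state (sandbox
// absent) already holds; only other lookup errors propagate.
func removeSandbox(id string, lookup func(string) error) error {
	if err := lookup(id); errors.Is(err, errNotFound) {
		fmt.Printf("warning: sandbox %q not found; treating removal as complete\n", id)
		return nil
	} else if err != nil {
		return err
	}
	// ... actual network teardown and state removal would happen here ...
	return nil
}

func main() {
	err := removeSandbox("06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619",
		func(string) error { return errNotFound })
	fmt.Println("RemovePodSandbox returns:", err) // <nil>, i.e. success
}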
Aug 13 00:43:02.883328 containerd[1603]: time="2025-08-13T00:43:02.883319601Z" level=info msg="RemovePodSandbox \"06db12d6bdb50940bfc77c41abd42923399b4eb006b71b28d331d3a82d8c8619\" returns successfully" Aug 13 00:43:02.884390 containerd[1603]: time="2025-08-13T00:43:02.883964048Z" level=info msg="StopPodSandbox for \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\"" Aug 13 00:43:02.977293 kubelet[2732]: I0813 00:43:02.977166 2732 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.924 [WARNING][5652] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0", GenerateName:"calico-kube-controllers-6bb9ddbc7d-", Namespace:"calico-system", SelfLink:"", UID:"9c15bfda-2353-4522-94fd-e2dfc420915b", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb9ddbc7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff", Pod:"calico-kube-controllers-6bb9ddbc7d-mqfxp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib945008449a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.925 [INFO][5652] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.925 [INFO][5652] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" iface="eth0" netns="" Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.925 [INFO][5652] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.925 [INFO][5652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.952 [INFO][5659] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" HandleID="k8s-pod-network.b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.953 [INFO][5659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.953 [INFO][5659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.969 [WARNING][5659] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" HandleID="k8s-pod-network.b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.970 [INFO][5659] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" HandleID="k8s-pod-network.b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.974 [INFO][5659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:02.978510 containerd[1603]: 2025-08-13 00:43:02.976 [INFO][5652] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:43:02.979223 containerd[1603]: time="2025-08-13T00:43:02.978538723Z" level=info msg="TearDown network for sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\" successfully" Aug 13 00:43:02.979223 containerd[1603]: time="2025-08-13T00:43:02.978587084Z" level=info msg="StopPodSandbox for \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\" returns successfully" Aug 13 00:43:02.979223 containerd[1603]: time="2025-08-13T00:43:02.979093969Z" level=info msg="RemovePodSandbox for \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\"" Aug 13 00:43:02.979223 containerd[1603]: time="2025-08-13T00:43:02.979125690Z" level=info msg="Forcibly stopping sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\"" Aug 13 00:43:02.982673 kubelet[2732]: I0813 00:43:02.981654 2732 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.044 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0", GenerateName:"calico-kube-controllers-6bb9ddbc7d-", Namespace:"calico-system", SelfLink:"", UID:"9c15bfda-2353-4522-94fd-e2dfc420915b", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb9ddbc7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"bcaf9a039f928ca8b66f56b2a4cc4115be32a9361021214e541741c5779d72ff", Pod:"calico-kube-controllers-6bb9ddbc7d-mqfxp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib945008449a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.045 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.045 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" iface="eth0" netns="" Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.045 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.045 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.075 [INFO][5680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" HandleID="k8s-pod-network.b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.075 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.076 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.089 [WARNING][5680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" HandleID="k8s-pod-network.b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.089 [INFO][5680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" HandleID="k8s-pod-network.b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Workload="ci--4081--3--5--c--674096e178-k8s-calico--kube--controllers--6bb9ddbc7d--mqfxp-eth0" Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.091 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:03.095970 containerd[1603]: 2025-08-13 00:43:03.093 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717" Aug 13 00:43:03.096499 containerd[1603]: time="2025-08-13T00:43:03.096032469Z" level=info msg="TearDown network for sandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\" successfully" Aug 13 00:43:03.100841 containerd[1603]: time="2025-08-13T00:43:03.100758802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:43:03.101037 containerd[1603]: time="2025-08-13T00:43:03.100925284Z" level=info msg="RemovePodSandbox \"b74be2a6e92df5b56e75dd5152498b3963e5062245327ac940accafaf1827717\" returns successfully" Aug 13 00:43:03.101550 containerd[1603]: time="2025-08-13T00:43:03.101509730Z" level=info msg="StopPodSandbox for \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\"" Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.150 [WARNING][5694] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0", GenerateName:"calico-apiserver-6fcb999d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"1beb254b-638d-4817-98ac-f5a8ad60ec6e", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb999d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c", Pod:"calico-apiserver-6fcb999d87-6t872", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif48def4e5cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.151 [INFO][5694] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.151 [INFO][5694] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" iface="eth0" netns="" Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.151 [INFO][5694] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.151 [INFO][5694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.177 [INFO][5701] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" HandleID="k8s-pod-network.201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.177 [INFO][5701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.177 [INFO][5701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.193 [WARNING][5701] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" HandleID="k8s-pod-network.201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.193 [INFO][5701] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" HandleID="k8s-pod-network.201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.195 [INFO][5701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:03.199486 containerd[1603]: 2025-08-13 00:43:03.197 [INFO][5694] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:43:03.199486 containerd[1603]: time="2025-08-13T00:43:03.199344332Z" level=info msg="TearDown network for sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\" successfully" Aug 13 00:43:03.199486 containerd[1603]: time="2025-08-13T00:43:03.199371572Z" level=info msg="StopPodSandbox for \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\" returns successfully" Aug 13 00:43:03.200992 containerd[1603]: time="2025-08-13T00:43:03.200560546Z" level=info msg="RemovePodSandbox for \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\"" Aug 13 00:43:03.200992 containerd[1603]: time="2025-08-13T00:43:03.200609506Z" level=info msg="Forcibly stopping sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\"" Aug 13 00:43:03.270423 kubelet[2732]: I0813 00:43:03.269567 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cldrb" podStartSLOduration=26.066935647 podStartE2EDuration="38.269542909s" podCreationTimestamp="2025-08-13 00:42:25 +0000 UTC" firstStartedPulling="2025-08-13 00:42:50.463983676 +0000 UTC m=+48.786582827" lastFinishedPulling="2025-08-13 00:43:02.666590978 +0000 UTC m=+60.989190089" observedRunningTime="2025-08-13 00:43:03.268894941 +0000 UTC m=+61.591494052" watchObservedRunningTime="2025-08-13 00:43:03.269542909 +0000 UTC m=+61.592142060" Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.260 [WARNING][5716] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0", GenerateName:"calico-apiserver-6fcb999d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"1beb254b-638d-4817-98ac-f5a8ad60ec6e", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb999d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"ea0a5b1f34bcb5083d105a8f5af13c1e549ee7c137365f30f0cd0a8e355e985c", Pod:"calico-apiserver-6fcb999d87-6t872", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif48def4e5cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.261 [INFO][5716] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.261 [INFO][5716] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" iface="eth0" netns="" Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.261 [INFO][5716] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.261 [INFO][5716] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.299 [INFO][5723] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" HandleID="k8s-pod-network.201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.300 [INFO][5723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.300 [INFO][5723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.311 [WARNING][5723] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" HandleID="k8s-pod-network.201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.311 [INFO][5723] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" HandleID="k8s-pod-network.201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--6t872-eth0" Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.314 [INFO][5723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:03.317920 containerd[1603]: 2025-08-13 00:43:03.315 [INFO][5716] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532" Aug 13 00:43:03.318629 containerd[1603]: time="2025-08-13T00:43:03.318515050Z" level=info msg="TearDown network for sandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\" successfully" Aug 13 00:43:03.324024 containerd[1603]: time="2025-08-13T00:43:03.323950190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:43:03.324168 containerd[1603]: time="2025-08-13T00:43:03.324058992Z" level=info msg="RemovePodSandbox \"201ba90b7c2ce472c0e40f17f8df1a5536f300853682883a70a98c3af762d532\" returns successfully" Aug 13 00:43:03.324947 containerd[1603]: time="2025-08-13T00:43:03.324686399Z" level=info msg="StopPodSandbox for \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\"" Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.376 [WARNING][5743] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"090fd80a-e98c-47af-a53e-06165e3cc066", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24", Pod:"goldmane-58fd7646b9-844tr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1a8968ffdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.376 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.376 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" iface="eth0" netns="" Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.376 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.376 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.413 [INFO][5751] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" HandleID="k8s-pod-network.60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.413 [INFO][5751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.413 [INFO][5751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.433 [WARNING][5751] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" HandleID="k8s-pod-network.60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.433 [INFO][5751] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" HandleID="k8s-pod-network.60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.435 [INFO][5751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:03.444524 containerd[1603]: 2025-08-13 00:43:03.440 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:43:03.445118 containerd[1603]: time="2025-08-13T00:43:03.444631045Z" level=info msg="TearDown network for sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\" successfully" Aug 13 00:43:03.445118 containerd[1603]: time="2025-08-13T00:43:03.444673846Z" level=info msg="StopPodSandbox for \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\" returns successfully" Aug 13 00:43:03.446370 containerd[1603]: time="2025-08-13T00:43:03.446334224Z" level=info msg="RemovePodSandbox for \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\"" Aug 13 00:43:03.446507 containerd[1603]: time="2025-08-13T00:43:03.446480466Z" level=info msg="Forcibly stopping sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\"" Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.521 [WARNING][5807] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"090fd80a-e98c-47af-a53e-06165e3cc066", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"3cdca9ff8583e8d04f0b7a34d0fedaa40a93d9640329289b4f45793c12808e24", Pod:"goldmane-58fd7646b9-844tr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1a8968ffdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.524 [INFO][5807] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.525 [INFO][5807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" iface="eth0" netns="" Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.525 [INFO][5807] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.526 [INFO][5807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.578 [INFO][5817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" HandleID="k8s-pod-network.60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.578 [INFO][5817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.578 [INFO][5817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.589 [WARNING][5817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" HandleID="k8s-pod-network.60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.589 [INFO][5817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" HandleID="k8s-pod-network.60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Workload="ci--4081--3--5--c--674096e178-k8s-goldmane--58fd7646b9--844tr-eth0" Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.592 [INFO][5817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:03.598255 containerd[1603]: 2025-08-13 00:43:03.595 [INFO][5807] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24" Aug 13 00:43:03.599744 containerd[1603]: time="2025-08-13T00:43:03.599039433Z" level=info msg="TearDown network for sandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\" successfully" Aug 13 00:43:03.605897 containerd[1603]: time="2025-08-13T00:43:03.605823268Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:43:03.606018 containerd[1603]: time="2025-08-13T00:43:03.605921629Z" level=info msg="RemovePodSandbox \"60024a7737948852db4f7a0298222c4cfa5b9496022e31a79c70d80b0e499a24\" returns successfully" Aug 13 00:43:03.606833 containerd[1603]: time="2025-08-13T00:43:03.606444675Z" level=info msg="StopPodSandbox for \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\"" Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.647 [WARNING][5833] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0", GenerateName:"calico-apiserver-6fcb999d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df81e4e-8fb8-429d-9966-a87b9cc013c8", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb999d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379", Pod:"calico-apiserver-6fcb999d87-pw4vs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae571d036be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.647 [INFO][5833] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.647 [INFO][5833] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" iface="eth0" netns="" Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.647 [INFO][5833] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.647 [INFO][5833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.670 [INFO][5840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" HandleID="k8s-pod-network.4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.670 [INFO][5840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.670 [INFO][5840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.680 [WARNING][5840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" HandleID="k8s-pod-network.4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.680 [INFO][5840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" HandleID="k8s-pod-network.4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.685 [INFO][5840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:03.689962 containerd[1603]: 2025-08-13 00:43:03.687 [INFO][5833] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:43:03.690569 containerd[1603]: time="2025-08-13T00:43:03.690004640Z" level=info msg="TearDown network for sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\" successfully" Aug 13 00:43:03.690569 containerd[1603]: time="2025-08-13T00:43:03.690090560Z" level=info msg="StopPodSandbox for \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\" returns successfully" Aug 13 00:43:03.690662 containerd[1603]: time="2025-08-13T00:43:03.690593646Z" level=info msg="RemovePodSandbox for \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\"" Aug 13 00:43:03.690662 containerd[1603]: time="2025-08-13T00:43:03.690626286Z" level=info msg="Forcibly stopping sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\"" Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.732 [WARNING][5854] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0", GenerateName:"calico-apiserver-6fcb999d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df81e4e-8fb8-429d-9966-a87b9cc013c8", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb999d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-c-674096e178", ContainerID:"d4ecdc60dc934f535823330709305086b38594ebf41cdc0fbeee9b1db4c71379", Pod:"calico-apiserver-6fcb999d87-pw4vs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae571d036be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.732 [INFO][5854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.732 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" iface="eth0" netns="" Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.732 [INFO][5854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.732 [INFO][5854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.754 [INFO][5861] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" HandleID="k8s-pod-network.4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.754 [INFO][5861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.755 [INFO][5861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.768 [WARNING][5861] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" HandleID="k8s-pod-network.4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.768 [INFO][5861] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" HandleID="k8s-pod-network.4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Workload="ci--4081--3--5--c--674096e178-k8s-calico--apiserver--6fcb999d87--pw4vs-eth0" Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.770 [INFO][5861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:03.775685 containerd[1603]: 2025-08-13 00:43:03.773 [INFO][5854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5" Aug 13 00:43:03.775685 containerd[1603]: time="2025-08-13T00:43:03.775689707Z" level=info msg="TearDown network for sandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\" successfully" Aug 13 00:43:03.779651 containerd[1603]: time="2025-08-13T00:43:03.779569710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:43:03.779927 containerd[1603]: time="2025-08-13T00:43:03.779710432Z" level=info msg="RemovePodSandbox \"4be37e3b5f0d600223eaa37c8d3bc518c07d9627c53b77ee0f210a4d32dd56c5\" returns successfully" Aug 13 00:43:09.087264 systemd[1]: Started sshd@8-91.99.159.132:22-103.203.57.11:47294.service - OpenSSH per-connection server daemon (103.203.57.11:47294). Aug 13 00:43:09.191137 sshd[5892]: Connection closed by 103.203.57.11 port 47294 Aug 13 00:43:09.193761 systemd[1]: sshd@8-91.99.159.132:22-103.203.57.11:47294.service: Deactivated successfully. Aug 13 00:43:21.466348 kubelet[2732]: I0813 00:43:21.464314 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:43:59.865180 systemd[1]: Started sshd@9-91.99.159.132:22-139.178.89.65:34310.service - OpenSSH per-connection server daemon (139.178.89.65:34310). Aug 13 00:44:00.868443 sshd[6014]: Accepted publickey for core from 139.178.89.65 port 34310 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:00.869668 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:00.877773 systemd-logind[1581]: New session 8 of user core. Aug 13 00:44:00.885289 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:44:01.831111 sshd[6014]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:01.845677 systemd[1]: sshd@9-91.99.159.132:22-139.178.89.65:34310.service: Deactivated successfully. Aug 13 00:44:01.852808 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:44:01.861101 systemd-logind[1581]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:44:01.864242 systemd-logind[1581]: Removed session 8. Aug 13 00:44:03.422683 systemd[1]: run-containerd-runc-k8s.io-9ed8f66a8b025aeccf90cc8c2bd23493706113fb98cbdcc3ede7895212aafaa5-runc.Z33mly.mount: Deactivated successfully. 
Aug 13 00:44:06.996743 systemd[1]: Started sshd@10-91.99.159.132:22-139.178.89.65:34320.service - OpenSSH per-connection server daemon (139.178.89.65:34320). Aug 13 00:44:08.002412 sshd[6099]: Accepted publickey for core from 139.178.89.65 port 34320 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:08.007039 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:08.013375 systemd-logind[1581]: New session 9 of user core. Aug 13 00:44:08.023547 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:44:08.824269 sshd[6099]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:08.830136 systemd[1]: sshd@10-91.99.159.132:22-139.178.89.65:34320.service: Deactivated successfully. Aug 13 00:44:08.835718 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:44:08.836977 systemd-logind[1581]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:44:08.838364 systemd-logind[1581]: Removed session 9. Aug 13 00:44:09.001561 systemd[1]: Started sshd@11-91.99.159.132:22-139.178.89.65:34328.service - OpenSSH per-connection server daemon (139.178.89.65:34328). Aug 13 00:44:09.992153 sshd[6116]: Accepted publickey for core from 139.178.89.65 port 34328 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:09.994146 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:09.999073 systemd-logind[1581]: New session 10 of user core. Aug 13 00:44:10.006651 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:44:10.822316 sshd[6116]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:10.830435 systemd-logind[1581]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:44:10.831290 systemd[1]: sshd@11-91.99.159.132:22-139.178.89.65:34328.service: Deactivated successfully. Aug 13 00:44:10.837754 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:44:10.844028 systemd-logind[1581]: Removed session 10. Aug 13 00:44:11.010262 systemd[1]: Started sshd@12-91.99.159.132:22-139.178.89.65:35366.service - OpenSSH per-connection server daemon (139.178.89.65:35366). Aug 13 00:44:12.071088 sshd[6130]: Accepted publickey for core from 139.178.89.65 port 35366 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:12.073481 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:12.081120 systemd-logind[1581]: New session 11 of user core. Aug 13 00:44:12.084319 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:44:12.979697 sshd[6130]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:12.986674 systemd[1]: sshd@12-91.99.159.132:22-139.178.89.65:35366.service: Deactivated successfully. Aug 13 00:44:12.991736 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:44:12.994190 systemd-logind[1581]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:44:12.995309 systemd-logind[1581]: Removed session 11. Aug 13 00:44:18.140318 systemd[1]: Started sshd@13-91.99.159.132:22-139.178.89.65:35374.service - OpenSSH per-connection server daemon (139.178.89.65:35374). 
Aug 13 00:44:19.131252 sshd[6147]: Accepted publickey for core from 139.178.89.65 port 35374 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:19.133555 sshd[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:19.139225 systemd-logind[1581]: New session 12 of user core. Aug 13 00:44:19.147283 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:44:19.898231 sshd[6147]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:19.902526 systemd[1]: sshd@13-91.99.159.132:22-139.178.89.65:35374.service: Deactivated successfully. Aug 13 00:44:19.906023 systemd-logind[1581]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:44:19.907195 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:44:19.909110 systemd-logind[1581]: Removed session 12. Aug 13 00:44:25.066277 systemd[1]: Started sshd@14-91.99.159.132:22-139.178.89.65:43340.service - OpenSSH per-connection server daemon (139.178.89.65:43340). Aug 13 00:44:26.061060 sshd[6182]: Accepted publickey for core from 139.178.89.65 port 43340 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:26.062866 sshd[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:26.069345 systemd-logind[1581]: New session 13 of user core. Aug 13 00:44:26.074372 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:44:26.858258 sshd[6182]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:26.863872 systemd[1]: sshd@14-91.99.159.132:22-139.178.89.65:43340.service: Deactivated successfully. Aug 13 00:44:26.868042 systemd-logind[1581]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:44:26.868646 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:44:26.870484 systemd-logind[1581]: Removed session 13. Aug 13 00:44:32.032931 systemd[1]: Started sshd@15-91.99.159.132:22-139.178.89.65:58412.service - OpenSSH per-connection server daemon (139.178.89.65:58412). Aug 13 00:44:33.039576 sshd[6195]: Accepted publickey for core from 139.178.89.65 port 58412 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:33.043092 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:33.049318 systemd-logind[1581]: New session 14 of user core. Aug 13 00:44:33.058147 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:44:33.418597 systemd[1]: run-containerd-runc-k8s.io-9ed8f66a8b025aeccf90cc8c2bd23493706113fb98cbdcc3ede7895212aafaa5-runc.a9y5tO.mount: Deactivated successfully. Aug 13 00:44:33.873445 sshd[6195]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:33.881185 systemd-logind[1581]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:44:33.883850 systemd[1]: sshd@15-91.99.159.132:22-139.178.89.65:58412.service: Deactivated successfully. Aug 13 00:44:33.888842 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:44:33.892474 systemd-logind[1581]: Removed session 14. Aug 13 00:44:34.046690 systemd[1]: Started sshd@16-91.99.159.132:22-139.178.89.65:58420.service - OpenSSH per-connection server daemon (139.178.89.65:58420). 
Aug 13 00:44:35.059752 sshd[6248]: Accepted publickey for core from 139.178.89.65 port 58420 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:35.062712 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:35.070777 systemd-logind[1581]: New session 15 of user core. Aug 13 00:44:35.077051 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:44:36.047129 sshd[6248]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:36.053355 systemd[1]: sshd@16-91.99.159.132:22-139.178.89.65:58420.service: Deactivated successfully. Aug 13 00:44:36.057539 systemd-logind[1581]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:44:36.058168 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:44:36.059824 systemd-logind[1581]: Removed session 15. Aug 13 00:44:36.216238 systemd[1]: Started sshd@17-91.99.159.132:22-139.178.89.65:58428.service - OpenSSH per-connection server daemon (139.178.89.65:58428). Aug 13 00:44:37.218795 sshd[6260]: Accepted publickey for core from 139.178.89.65 port 58428 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:37.220931 sshd[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:37.228845 systemd-logind[1581]: New session 16 of user core. Aug 13 00:44:37.234353 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:44:39.797238 sshd[6260]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:39.804932 systemd[1]: sshd@17-91.99.159.132:22-139.178.89.65:58428.service: Deactivated successfully. Aug 13 00:44:39.810806 systemd-logind[1581]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:44:39.813077 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:44:39.816271 systemd-logind[1581]: Removed session 16. Aug 13 00:44:39.966471 systemd[1]: Started sshd@18-91.99.159.132:22-139.178.89.65:41926.service - OpenSSH per-connection server daemon (139.178.89.65:41926). Aug 13 00:44:40.980898 sshd[6308]: Accepted publickey for core from 139.178.89.65 port 41926 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:40.981833 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:40.989574 systemd-logind[1581]: New session 17 of user core. Aug 13 00:44:40.994450 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:44:42.198198 sshd[6308]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:42.204546 systemd-logind[1581]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:44:42.206118 systemd[1]: sshd@18-91.99.159.132:22-139.178.89.65:41926.service: Deactivated successfully. Aug 13 00:44:42.211585 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:44:42.216329 systemd-logind[1581]: Removed session 17. Aug 13 00:44:42.370157 systemd[1]: Started sshd@19-91.99.159.132:22-139.178.89.65:41936.service - OpenSSH per-connection server daemon (139.178.89.65:41936). Aug 13 00:44:43.361938 sshd[6320]: Accepted publickey for core from 139.178.89.65 port 41936 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:43.363924 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:43.372319 systemd-logind[1581]: New session 18 of user core. Aug 13 00:44:43.380164 systemd[1]: Started session-18.scope - Session 18 of User core. 
Aug 13 00:44:44.180266 sshd[6320]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:44.187474 systemd[1]: sshd@19-91.99.159.132:22-139.178.89.65:41936.service: Deactivated successfully. Aug 13 00:44:44.195673 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:44:44.199478 systemd-logind[1581]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:44:44.201568 systemd-logind[1581]: Removed session 18. Aug 13 00:44:49.378456 systemd[1]: Started sshd@20-91.99.159.132:22-139.178.89.65:59844.service - OpenSSH per-connection server daemon (139.178.89.65:59844). Aug 13 00:44:50.429783 sshd[6337]: Accepted publickey for core from 139.178.89.65 port 59844 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:50.432083 sshd[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:50.438200 systemd-logind[1581]: New session 19 of user core. Aug 13 00:44:50.442177 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:44:51.254199 sshd[6337]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:51.258440 systemd[1]: sshd@20-91.99.159.132:22-139.178.89.65:59844.service: Deactivated successfully. Aug 13 00:44:51.262790 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:44:51.266043 systemd-logind[1581]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:44:51.267395 systemd-logind[1581]: Removed session 19. Aug 13 00:44:56.417346 systemd[1]: Started sshd@21-91.99.159.132:22-139.178.89.65:59860.service - OpenSSH per-connection server daemon (139.178.89.65:59860). Aug 13 00:44:57.406692 sshd[6370]: Accepted publickey for core from 139.178.89.65 port 59860 ssh2: RSA SHA256:9e2Hg8u+nSxXYAkzcQw5pk/rbleMVV68OvZer8oiL8w Aug 13 00:44:57.409260 sshd[6370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:57.414640 systemd-logind[1581]: New session 20 of user core. Aug 13 00:44:57.420171 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:44:58.196263 sshd[6370]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:58.202204 systemd[1]: sshd@21-91.99.159.132:22-139.178.89.65:59860.service: Deactivated successfully. Aug 13 00:44:58.207781 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:44:58.209416 systemd-logind[1581]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:44:58.211204 systemd-logind[1581]: Removed session 20.