Sep 13 00:06:44.867049 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:06:44.867069 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:44.867076 kernel: BIOS-provided physical RAM map:
Sep 13 00:06:44.867081 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:06:44.867086 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:06:44.867090 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:06:44.867096 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Sep 13 00:06:44.867100 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Sep 13 00:06:44.867106 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:06:44.867111 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 00:06:44.867115 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:06:44.867120 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:06:44.867124 kernel: NX (Execute Disable) protection: active
Sep 13 00:06:44.867129 kernel: APIC: Static calls initialized
Sep 13 00:06:44.867135 kernel: SMBIOS 2.8 present.
Sep 13 00:06:44.867141 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Sep 13 00:06:44.867146 kernel: Hypervisor detected: KVM
Sep 13 00:06:44.867150 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:06:44.867155 kernel: kvm-clock: using sched offset of 2920800419 cycles
Sep 13 00:06:44.867160 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:06:44.867166 kernel: tsc: Detected 2445.406 MHz processor
Sep 13 00:06:44.867171 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:06:44.867177 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:06:44.867183 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Sep 13 00:06:44.867188 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 00:06:44.867193 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:06:44.867198 kernel: Using GB pages for direct mapping
Sep 13 00:06:44.867203 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:06:44.867208 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Sep 13 00:06:44.867213 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:44.867218 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:44.867223 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:44.867229 kernel: ACPI: FACS 0x000000007CFE0000 000040
Sep 13 00:06:44.867234 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:44.867239 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:44.867244 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:44.867249 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:44.867254 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Sep 13 00:06:44.867259 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Sep 13 00:06:44.867264 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Sep 13 00:06:44.867272 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Sep 13 00:06:44.867278 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Sep 13 00:06:44.867283 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Sep 13 00:06:44.867288 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Sep 13 00:06:44.867293 kernel: No NUMA configuration found
Sep 13 00:06:44.867299 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Sep 13 00:06:44.867305 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Sep 13 00:06:44.867310 kernel: Zone ranges:
Sep 13 00:06:44.867315 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:06:44.867321 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Sep 13 00:06:44.867326 kernel: Normal empty
Sep 13 00:06:44.867331 kernel: Movable zone start for each node
Sep 13 00:06:44.867336 kernel: Early memory node ranges
Sep 13 00:06:44.867341 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:06:44.867346 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Sep 13 00:06:44.867351 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Sep 13 00:06:44.867358 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:06:44.867363 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:06:44.867369 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 00:06:44.867374 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:06:44.867380 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:06:44.867385 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:06:44.867390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:06:44.867395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:06:44.867400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:06:44.867407 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:06:44.867412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:06:44.867417 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:06:44.867422 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:06:44.867428 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:06:44.867433 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:06:44.867438 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 00:06:44.867443 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:06:44.867449 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:06:44.867455 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:06:44.867460 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:06:44.867466 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:06:44.867471 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:06:44.867476 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 13 00:06:44.867482 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:44.867488 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:06:44.867494 kernel: random: crng init done
Sep 13 00:06:44.867500 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:06:44.867505 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:06:44.867511 kernel: Fallback order for Node 0: 0
Sep 13 00:06:44.867516 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Sep 13 00:06:44.867521 kernel: Policy zone: DMA32
Sep 13 00:06:44.867526 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:06:44.867532 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 13 00:06:44.867537 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:06:44.867543 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:06:44.867549 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:06:44.867554 kernel: Dynamic Preempt: voluntary
Sep 13 00:06:44.867559 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:06:44.867567 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:06:44.867573 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:06:44.867578 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:06:44.867584 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:06:44.867589 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:06:44.867594 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:06:44.867600 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:06:44.867606 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:06:44.867626 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:06:44.867631 kernel: Console: colour VGA+ 80x25
Sep 13 00:06:44.867636 kernel: printk: console [tty0] enabled
Sep 13 00:06:44.867642 kernel: printk: console [ttyS0] enabled
Sep 13 00:06:44.867647 kernel: ACPI: Core revision 20230628
Sep 13 00:06:44.867652 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:06:44.867658 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:06:44.867663 kernel: x2apic enabled
Sep 13 00:06:44.867670 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:06:44.867675 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:06:44.867680 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:06:44.867686 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Sep 13 00:06:44.867691 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:06:44.867696 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:06:44.867702 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:06:44.867707 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:06:44.867718 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:06:44.867723 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:06:44.867729 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:06:44.867734 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:06:44.867741 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:06:44.867747 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:06:44.867752 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:06:44.867758 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:06:44.867764 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:06:44.867770 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:06:44.867776 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:06:44.867782 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:06:44.867787 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:06:44.867793 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:06:44.867798 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:06:44.867804 kernel: landlock: Up and running.
Sep 13 00:06:44.867809 kernel: SELinux: Initializing.
Sep 13 00:06:44.867816 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:06:44.867822 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:06:44.867827 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:06:44.867833 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:44.867839 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:44.867844 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:44.867877 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:06:44.867882 kernel: ... version: 0
Sep 13 00:06:44.867888 kernel: ... bit width: 48
Sep 13 00:06:44.867896 kernel: ... generic registers: 6
Sep 13 00:06:44.867901 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:06:44.867907 kernel: ... max period: 00007fffffffffff
Sep 13 00:06:44.868018 kernel: ... fixed-purpose events: 0
Sep 13 00:06:44.868024 kernel: ... event mask: 000000000000003f
Sep 13 00:06:44.868029 kernel: signal: max sigframe size: 1776
Sep 13 00:06:44.868035 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:06:44.868041 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:06:44.868047 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:06:44.868054 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:06:44.868060 kernel: .... node #0, CPUs: #1
Sep 13 00:06:44.868065 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:06:44.868071 kernel: smpboot: Max logical packages: 1
Sep 13 00:06:44.868076 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Sep 13 00:06:44.868082 kernel: devtmpfs: initialized
Sep 13 00:06:44.868087 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:06:44.868093 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:06:44.868099 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:06:44.868104 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:06:44.868111 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:06:44.868117 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:06:44.868123 kernel: audit: type=2000 audit(1757722004.282:1): state=initialized audit_enabled=0 res=1
Sep 13 00:06:44.868128 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:06:44.868134 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:06:44.868139 kernel: cpuidle: using governor menu
Sep 13 00:06:44.868145 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:06:44.868150 kernel: dca service started, version 1.12.1
Sep 13 00:06:44.868156 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:06:44.868163 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:06:44.868168 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:06:44.868174 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:06:44.868180 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:06:44.868185 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:06:44.868191 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:06:44.868197 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:06:44.868202 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:06:44.868208 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:06:44.868215 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:06:44.868220 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:06:44.868226 kernel: ACPI: Interpreter enabled
Sep 13 00:06:44.868231 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:06:44.868237 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:06:44.868243 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:06:44.868248 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:06:44.868254 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:06:44.868259 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:06:44.868380 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:06:44.868452 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:06:44.868514 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:06:44.868523 kernel: PCI host bridge to bus 0000:00
Sep 13 00:06:44.868589 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:06:44.868662 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:06:44.868724 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:06:44.868778 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Sep 13 00:06:44.868832 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:06:44.869221 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 00:06:44.869283 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:06:44.869362 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:06:44.869435 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Sep 13 00:06:44.869505 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Sep 13 00:06:44.869641 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Sep 13 00:06:44.869710 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Sep 13 00:06:44.869774 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Sep 13 00:06:44.869839 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:06:44.870531 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:06:44.870609 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Sep 13 00:06:44.870698 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 13 00:06:44.870763 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Sep 13 00:06:44.870832 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 13 00:06:44.870949 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Sep 13 00:06:44.871025 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 13 00:06:44.871093 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Sep 13 00:06:44.871161 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 13 00:06:44.871223 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Sep 13 00:06:44.871290 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 13 00:06:44.871352 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Sep 13 00:06:44.871419 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 13 00:06:44.871486 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Sep 13 00:06:44.871552 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 13 00:06:44.871628 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Sep 13 00:06:44.871698 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:06:44.871761 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Sep 13 00:06:44.871828 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:06:44.872164 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:06:44.872247 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:06:44.872311 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Sep 13 00:06:44.872374 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Sep 13 00:06:44.872441 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:06:44.872503 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 13 00:06:44.872579 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:06:44.872668 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Sep 13 00:06:44.872734 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 13 00:06:44.872797 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Sep 13 00:06:44.872929 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 13 00:06:44.873000 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 13 00:06:44.873063 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 13 00:06:44.873134 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 13 00:06:44.873205 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Sep 13 00:06:44.873268 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 13 00:06:44.873329 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 13 00:06:44.873391 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 00:06:44.873461 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 13 00:06:44.873526 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Sep 13 00:06:44.873595 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Sep 13 00:06:44.873676 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 13 00:06:44.873740 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 13 00:06:44.873801 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 00:06:44.873905 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 13 00:06:44.873974 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 13 00:06:44.874035 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 13 00:06:44.874095 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 13 00:06:44.874160 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 00:06:44.874230 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 13 00:06:44.874295 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Sep 13 00:06:44.874357 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 13 00:06:44.874418 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 13 00:06:44.874478 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 00:06:44.874551 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 13 00:06:44.874638 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Sep 13 00:06:44.874704 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Sep 13 00:06:44.874767 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 13 00:06:44.874827 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 13 00:06:44.874921 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 00:06:44.874930 kernel: acpiphp: Slot [0] registered
Sep 13 00:06:44.875001 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:06:44.875073 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Sep 13 00:06:44.875136 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Sep 13 00:06:44.875199 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Sep 13 00:06:44.875261 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 13 00:06:44.875322 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 13 00:06:44.875382 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 00:06:44.875391 kernel: acpiphp: Slot [0-2] registered
Sep 13 00:06:44.875451 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 13 00:06:44.875516 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 13 00:06:44.875576 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 00:06:44.875585 kernel: acpiphp: Slot [0-3] registered
Sep 13 00:06:44.875664 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 13 00:06:44.875727 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 13 00:06:44.875788 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 00:06:44.875797 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:06:44.875803 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:06:44.875808 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:06:44.875817 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:06:44.875822 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:06:44.875828 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:06:44.875834 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:06:44.875840 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:06:44.875846 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:06:44.875899 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:06:44.875905 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:06:44.875911 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:06:44.875919 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:06:44.875925 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:06:44.875931 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:06:44.875937 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:06:44.875942 kernel: iommu: Default domain type: Translated
Sep 13 00:06:44.875948 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:06:44.875954 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:06:44.875959 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:06:44.875965 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:06:44.875973 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Sep 13 00:06:44.876043 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:06:44.876105 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:06:44.876165 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:06:44.876173 kernel: vgaarb: loaded
Sep 13 00:06:44.876179 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:06:44.876185 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:06:44.876191 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:06:44.876197 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:06:44.876205 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:06:44.876211 kernel: pnp: PnP ACPI init
Sep 13 00:06:44.876283 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:06:44.876292 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 00:06:44.876299 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:06:44.876304 kernel: NET: Registered PF_INET protocol family
Sep 13 00:06:44.876310 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:06:44.876316 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:06:44.876325 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:06:44.876331 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:06:44.876336 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:06:44.876342 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:06:44.876348 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:06:44.876354 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:06:44.876360 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:06:44.876365 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:06:44.876428 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 13 00:06:44.876495 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 13 00:06:44.876556 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 13 00:06:44.876637 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Sep 13 00:06:44.876702 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Sep 13 00:06:44.876764 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Sep 13 00:06:44.876826 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 13 00:06:44.876925 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 13 00:06:44.876990 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 13 00:06:44.877050 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 13 00:06:44.877111 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 13 00:06:44.877171 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 00:06:44.877231 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 13 00:06:44.877292 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 13 00:06:44.877352 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 00:06:44.877412 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 13 00:06:44.877485 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 13 00:06:44.877574 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 00:06:44.877676 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 13 00:06:44.877740 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 13 00:06:44.877801 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 00:06:44.878523 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 13 00:06:44.878605 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 13 00:06:44.878698 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 00:06:44.878766 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 13 00:06:44.878827 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Sep 13 00:06:44.878908 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 13 00:06:44.878972 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 00:06:44.879051 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 13 00:06:44.879112 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Sep 13 00:06:44.879173 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 13 00:06:44.879234 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 00:06:44.879294 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 13 00:06:44.879360 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Sep 13 00:06:44.879421 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 13 00:06:44.879482 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 00:06:44.879547 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:06:44.879603 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:06:44.879676 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:06:44.879731 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Sep 13 00:06:44.879784 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:06:44.879839 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 00:06:44.879975 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 13 00:06:44.880041 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 13 00:06:44.880105 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 13 00:06:44.880162 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 00:06:44.880226 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 13 00:06:44.880282 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 00:06:44.880344 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 13 00:06:44.880406 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 00:06:44.882937 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 13 00:06:44.883008 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 00:06:44.883074 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 13 00:06:44.883132 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 00:06:44.883194 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Sep 13 00:06:44.883257 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 13 00:06:44.883312 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 00:06:44.883378 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Sep 13 00:06:44.883434 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Sep 13 00:06:44.883489 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 00:06:44.883551 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Sep 13 00:06:44.883628 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 13 00:06:44.883688 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 00:06:44.883697 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:06:44.883704 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:06:44.883710 kernel: Initialise system trusted keyrings
Sep 13 00:06:44.883716 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:06:44.883722 kernel: Key type asymmetric registered
Sep 13 00:06:44.883728 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:06:44.883734 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:06:44.883744 kernel: io scheduler mq-deadline registered
Sep 13 00:06:44.883750 kernel: io scheduler kyber registered
Sep 13 00:06:44.883756 kernel: io scheduler bfq registered
Sep 13 00:06:44.883821 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Sep 13 00:06:44.884597 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Sep 13 00:06:44.884690 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Sep 13 00:06:44.884756 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Sep 13 00:06:44.884824 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Sep 13 00:06:44.884931 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Sep 13 00:06:44.885004 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Sep 13 00:06:44.885068 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Sep 13 00:06:44.885130 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Sep 13 00:06:44.885192 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Sep 13 00:06:44.885254 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Sep 13 00:06:44.885315 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Sep 13 00:06:44.885377 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Sep 13 00:06:44.885440 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Sep 13 00:06:44.885578 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Sep 13 00:06:44.885684 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Sep 13 00:06:44.885694 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:06:44.885783 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Sep 13 00:06:44.885902 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Sep 13 00:06:44.885914 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:06:44.885921 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Sep 13 00:06:44.885927 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:06:44.885933 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:06:44.885943 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:06:44.885949 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:06:44.885955 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:06:44.886026 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 13 00:06:44.886036 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:06:44.886093 kernel: rtc_cmos 00:03: registered as rtc0
Sep 13 00:06:44.886151 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:06:44 UTC (1757722004)
Sep 13 00:06:44.886213 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:06:44.886221 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 00:06:44.886228 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:06:44.886235 kernel: Segment Routing with IPv6
Sep 13 00:06:44.886241 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:06:44.886247 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:06:44.886253 kernel: Key type dns_resolver registered
Sep 13 00:06:44.886259 kernel: IPI shorthand broadcast: enabled
Sep 13 00:06:44.886265 kernel: sched_clock: Marking stable (1101009664, 132784675)->(1244061212, -10266873)
Sep 13 00:06:44.886274 kernel: registered taskstats version 1
Sep 13 00:06:44.886280 kernel: Loading compiled-in X.509 certificates
Sep 13 00:06:44.886286 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:06:44.886292 kernel: Key type .fscrypt registered
Sep 13 00:06:44.886298 kernel: Key type fscrypt-provisioning registered
Sep 13 00:06:44.886304 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:06:44.886310 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:06:44.886317 kernel: ima: No architecture policies found
Sep 13 00:06:44.886324 kernel: clk: Disabling unused clocks
Sep 13 00:06:44.886332 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:06:44.886338 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:06:44.886344 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:06:44.886350 kernel: Run /init as init process
Sep 13 00:06:44.886356 kernel: with arguments:
Sep 13 00:06:44.886363 kernel: /init
Sep 13 00:06:44.886369 kernel: with environment:
Sep 13 00:06:44.886375 kernel: HOME=/
Sep 13 00:06:44.886381 kernel: TERM=linux
Sep 13 00:06:44.886388 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:06:44.886396 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:06:44.886404 systemd[1]: Detected virtualization kvm.
Sep 13 00:06:44.886411 systemd[1]: Detected architecture x86-64.
Sep 13 00:06:44.886417 systemd[1]: Running in initrd.
Sep 13 00:06:44.886424 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:06:44.886431 systemd[1]: Hostname set to <localhost>.
Sep 13 00:06:44.886439 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:06:44.886445 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:06:44.886452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:06:44.886458 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:06:44.886465 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:06:44.886472 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:06:44.886478 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:06:44.886485 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:06:44.886494 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:06:44.886500 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:06:44.886507 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:06:44.886514 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:06:44.886520 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:06:44.886526 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:06:44.886533 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:06:44.886541 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:06:44.886547 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:06:44.886553 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:06:44.886560 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:06:44.886566 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:06:44.886573 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:06:44.886579 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:06:44.886586 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:06:44.886592 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:06:44.886601 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:06:44.886607 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:06:44.886629 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:06:44.886635 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:06:44.886642 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:06:44.886648 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:06:44.886654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:44.886677 systemd-journald[187]: Collecting audit messages is disabled.
Sep 13 00:06:44.886696 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:06:44.886704 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:06:44.886710 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:06:44.886719 systemd-journald[187]: Journal started
Sep 13 00:06:44.886751 systemd-journald[187]: Runtime Journal (/run/log/journal/24131e2a6e9a46eab2c50cb67e0bdba3) is 4.8M, max 38.4M, 33.6M free.
Sep 13 00:06:44.885835 systemd-modules-load[188]: Inserted module 'overlay'
Sep 13 00:06:44.890275 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:06:44.906867 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:06:44.906925 kernel: Bridge firewalling registered
Sep 13 00:06:44.906620 systemd-modules-load[188]: Inserted module 'br_netfilter'
Sep 13 00:06:44.931381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:06:44.932631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:44.940011 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:44.943002 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:06:44.944160 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:06:44.948003 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:06:44.959448 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:44.961051 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:06:44.970347 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:06:44.971966 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:44.973325 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:06:44.974734 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:06:44.981134 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:06:44.984965 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:06:44.988887 dracut-cmdline[223]: dracut-dracut-053
Sep 13 00:06:44.992390 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:45.010534 systemd-resolved[224]: Positive Trust Anchors:
Sep 13 00:06:45.010547 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:06:45.010572 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:06:45.019335 systemd-resolved[224]: Defaulting to hostname 'linux'.
Sep 13 00:06:45.020249 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:06:45.020988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:06:45.052900 kernel: SCSI subsystem initialized
Sep 13 00:06:45.060890 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:06:45.071888 kernel: iscsi: registered transport (tcp)
Sep 13 00:06:45.089026 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:06:45.089103 kernel: QLogic iSCSI HBA Driver
Sep 13 00:06:45.124017 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:06:45.130994 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:06:45.154592 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:06:45.154671 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:06:45.154683 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:06:45.196894 kernel: raid6: avx2x4 gen() 34769 MB/s
Sep 13 00:06:45.213913 kernel: raid6: avx2x2 gen() 32160 MB/s
Sep 13 00:06:45.231029 kernel: raid6: avx2x1 gen() 26513 MB/s
Sep 13 00:06:45.231094 kernel: raid6: using algorithm avx2x4 gen() 34769 MB/s
Sep 13 00:06:45.249101 kernel: raid6: .... xor() 4582 MB/s, rmw enabled
Sep 13 00:06:45.249176 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:06:45.273898 kernel: xor: automatically using best checksumming function avx
Sep 13 00:06:45.386895 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:06:45.396685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:06:45.402987 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:06:45.411570 systemd-udevd[407]: Using default interface naming scheme 'v255'.
Sep 13 00:06:45.414427 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:06:45.422051 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:06:45.432494 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Sep 13 00:06:45.457124 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:06:45.462978 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:06:45.498562 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:06:45.505134 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:06:45.512651 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:06:45.516414 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:06:45.517713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:06:45.518162 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:06:45.524148 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:06:45.536039 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:06:45.586655 kernel: ACPI: bus type USB registered
Sep 13 00:06:45.586723 kernel: usbcore: registered new interface driver usbfs
Sep 13 00:06:45.587749 kernel: usbcore: registered new interface driver hub
Sep 13 00:06:45.592870 kernel: usbcore: registered new device driver usb
Sep 13 00:06:45.592900 kernel: scsi host0: Virtio SCSI HBA
Sep 13 00:06:45.601317 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:06:45.638842 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:06:45.641650 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:45.642790 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Sep 13 00:06:45.643254 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:45.644966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:45.645230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:45.654109 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:06:45.654169 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:06:45.646204 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:45.655707 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:45.665104 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 13 00:06:45.665265 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Sep 13 00:06:45.665355 kernel: libata version 3.00 loaded.
Sep 13 00:06:45.669155 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 13 00:06:45.674291 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 13 00:06:45.674489 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Sep 13 00:06:45.676883 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Sep 13 00:06:45.681652 kernel: hub 1-0:1.0: USB hub found
Sep 13 00:06:45.685986 kernel: hub 1-0:1.0: 4 ports detected
Sep 13 00:06:45.687310 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 13 00:06:45.688076 kernel: hub 2-0:1.0: USB hub found
Sep 13 00:06:45.688237 kernel: hub 2-0:1.0: 4 ports detected
Sep 13 00:06:45.713914 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 00:06:45.715873 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 00:06:45.715905 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 13 00:06:45.716018 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 00:06:45.716885 kernel: sd 0:0:0:0: Power-on or device reset occurred
Sep 13 00:06:45.717020 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Sep 13 00:06:45.717108 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 00:06:45.717187 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Sep 13 00:06:45.717265 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 13 00:06:45.720128 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:06:45.720147 kernel: GPT:17805311 != 80003071
Sep 13 00:06:45.720155 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:06:45.720168 kernel: GPT:17805311 != 80003071
Sep 13 00:06:45.720175 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:06:45.720182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:06:45.720189 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 00:06:45.720956 kernel: scsi host1: ahci
Sep 13 00:06:45.724919 kernel: scsi host2: ahci
Sep 13 00:06:45.726909 kernel: scsi host3: ahci
Sep 13 00:06:45.729872 kernel: scsi host4: ahci
Sep 13 00:06:45.733867 kernel: scsi host5: ahci
Sep 13 00:06:45.736030 kernel: scsi host6: ahci
Sep 13 00:06:45.736136 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49
Sep 13 00:06:45.736146 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49
Sep 13 00:06:45.736154 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49
Sep 13 00:06:45.736161 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49
Sep 13 00:06:45.736169 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49
Sep 13 00:06:45.736179 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49
Sep 13 00:06:45.756092 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (460)
Sep 13 00:06:45.773623 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Sep 13 00:06:45.793917 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (462)
Sep 13 00:06:45.794277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:45.799451 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Sep 13 00:06:45.803496 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Sep 13 00:06:45.804194 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Sep 13 00:06:45.809834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 13 00:06:45.819044 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:06:45.822978 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:45.830201 disk-uuid[568]: Primary Header is updated.
Sep 13 00:06:45.830201 disk-uuid[568]: Secondary Entries is updated.
Sep 13 00:06:45.830201 disk-uuid[568]: Secondary Header is updated.
Sep 13 00:06:45.838540 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:06:45.839720 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:45.844308 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:06:45.924073 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 13 00:06:46.051408 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 00:06:46.051484 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 00:06:46.051495 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 00:06:46.051505 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 00:06:46.051513 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 00:06:46.054653 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Sep 13 00:06:46.054699 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 00:06:46.056993 kernel: ata1.00: applying bridge limits
Sep 13 00:06:46.057122 kernel: ata1.00: configured for UDMA/100
Sep 13 00:06:46.059074 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:06:46.065888 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:06:46.078917 kernel: usbcore: registered new interface driver usbhid
Sep 13 00:06:46.078967 kernel: usbhid: USB HID core driver
Sep 13 00:06:46.084883 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Sep 13 00:06:46.084931 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Sep 13 00:06:46.103535 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 00:06:46.103822 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:06:46.114886 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:06:46.853886 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:06:46.855915 disk-uuid[574]: The operation has completed successfully.
Sep 13 00:06:46.918614 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:06:46.918789 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:06:46.941063 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:06:46.943676 sh[599]: Success
Sep 13 00:06:46.956934 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 13 00:06:47.006641 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:06:47.014613 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:06:47.015506 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:06:47.033896 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:06:47.033951 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:06:47.037260 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:06:47.037310 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:06:47.038776 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:06:47.049897 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 13 00:06:47.051671 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:06:47.052754 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:06:47.061036 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:06:47.065011 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:06:47.080799 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:06:47.080881 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:06:47.080908 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:06:47.087110 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:06:47.087153 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:06:47.100038 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:06:47.103326 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:06:47.110183 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:06:47.121088 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:06:47.192553 ignition[701]: Ignition 2.19.0 Sep 13 00:06:47.192563 ignition[701]: Stage: fetch-offline Sep 13 00:06:47.193872 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:06:47.192590 ignition[701]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:47.192597 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:06:47.192686 ignition[701]: parsed url from cmdline: "" Sep 13 00:06:47.196791 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:06:47.192689 ignition[701]: no config URL provided Sep 13 00:06:47.192693 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:06:47.192699 ignition[701]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:06:47.192703 ignition[701]: failed to fetch config: resource requires networking Sep 13 00:06:47.192877 ignition[701]: Ignition finished successfully Sep 13 00:06:47.206033 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:06:47.221064 systemd-networkd[786]: lo: Link UP Sep 13 00:06:47.221075 systemd-networkd[786]: lo: Gained carrier Sep 13 00:06:47.222597 systemd-networkd[786]: Enumeration completed Sep 13 00:06:47.222676 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:06:47.223348 systemd[1]: Reached target network.target - Network. Sep 13 00:06:47.223645 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
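The fetch-offline lines above trace Ignition's offline search order: baked-in configs under /usr/lib/ignition/base.d and a platform directory for hetzner, then a config URL from the kernel cmdline, then /usr/lib/ignition/user.ign. Since none exist and the Hetzner userdata endpoint needs networking, the stage ends with "resource requires networking" and defers to the fetch stage. A simplified sketch of that decision sequence (paths from the log; the real implementation is Go):

    import os

    # Simplified sketch of the lookup order logged by ignition's
    # fetch-offline stage; paths are taken from the log messages.
    def fetch_offline(cmdline_url=None, platform="hetzner"):
        for d in ("/usr/lib/ignition/base.d",
                  "/usr/lib/ignition/base.platform.d/" + platform):
            if not os.path.isdir(d):
                print(f'no configs at "{d}"')
        if cmdline_url:                        # "parsed url from cmdline"
            return ("url", cmdline_url)
        user = "/usr/lib/ignition/user.ign"
        if os.path.exists(user):               # "reading system config file"
            return ("file", user)
        # The Hetzner userdata endpoint is network-backed, so without
        # networking the stage fails exactly as in the log:
        raise RuntimeError("failed to fetch config: resource requires networking")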
Sep 13 00:06:47.223648 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:06:47.224321 systemd-networkd[786]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:47.224324 systemd-networkd[786]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:06:47.225145 systemd-networkd[786]: eth0: Link UP Sep 13 00:06:47.225148 systemd-networkd[786]: eth0: Gained carrier Sep 13 00:06:47.225154 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:47.229332 systemd-networkd[786]: eth1: Link UP Sep 13 00:06:47.229335 systemd-networkd[786]: eth1: Gained carrier Sep 13 00:06:47.229341 systemd-networkd[786]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:47.230989 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 13 00:06:47.242403 ignition[788]: Ignition 2.19.0 Sep 13 00:06:47.242418 ignition[788]: Stage: fetch Sep 13 00:06:47.242587 ignition[788]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:47.242598 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:06:47.242688 ignition[788]: parsed url from cmdline: "" Sep 13 00:06:47.242691 ignition[788]: no config URL provided Sep 13 00:06:47.242694 ignition[788]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:06:47.242700 ignition[788]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:06:47.242718 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 13 00:06:47.242830 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 13 00:06:47.258931 systemd-networkd[786]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:06:47.298923 systemd-networkd[786]: eth0: DHCPv4 address 65.108.146.26/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:06:47.443072 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 13 00:06:47.446427 ignition[788]: GET result: OK Sep 13 00:06:47.446547 ignition[788]: parsing config with SHA512: 849213517736830d600822ab78d5e86c17b9d38fd6175e50d94b6d62f4698b4bd9d5de37d4ed2291dcb9465e1fa1215a6f0bd427ce950d73ef90a1e8fe362453 Sep 13 00:06:47.450746 unknown[788]: fetched base config from "system" Sep 13 00:06:47.451366 unknown[788]: fetched base config from "system" Sep 13 00:06:47.451674 ignition[788]: fetch: fetch complete Sep 13 00:06:47.451372 unknown[788]: fetched user config from "hetzner" Sep 13 00:06:47.451678 ignition[788]: fetch: fetch passed Sep 13 00:06:47.453700 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 00:06:47.451720 ignition[788]: Ignition finished successfully Sep 13 00:06:47.459054 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
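Attempt #1 against the metadata endpoint fails with "network is unreachable" because it races DHCP; once eth0 has an address, attempt #2 succeeds and Ignition logs the SHA512 of the fetched config. A minimal sketch of that fetch-retry-hash loop, using the endpoint from the log:

    import hashlib
    import time
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"   # endpoint from the log

    def fetch_userdata(attempts: int = 5, delay: float = 1.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    return resp.read()
            except OSError as err:       # e.g. ENETUNREACH early in boot
                print(f"GET {URL}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise TimeoutError("userdata not reachable")

    data = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())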
Sep 13 00:06:47.472664 ignition[796]: Ignition 2.19.0 Sep 13 00:06:47.472676 ignition[796]: Stage: kargs Sep 13 00:06:47.472926 ignition[796]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:47.472937 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:06:47.477012 ignition[796]: kargs: kargs passed Sep 13 00:06:47.477559 ignition[796]: Ignition finished successfully Sep 13 00:06:47.478768 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:06:47.487064 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:06:47.499341 ignition[803]: Ignition 2.19.0 Sep 13 00:06:47.499354 ignition[803]: Stage: disks Sep 13 00:06:47.499521 ignition[803]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:47.499531 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:06:47.501311 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:06:47.500309 ignition[803]: disks: disks passed Sep 13 00:06:47.506731 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:06:47.500348 ignition[803]: Ignition finished successfully Sep 13 00:06:47.508694 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:06:47.510002 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:06:47.511081 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:06:47.512365 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:06:47.520014 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:06:47.533971 systemd-fsck[812]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 13 00:06:47.536527 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:06:47.542960 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:06:47.617880 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none. Sep 13 00:06:47.618503 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:06:47.619376 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:06:47.637982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:06:47.641096 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:06:47.643936 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 13 00:06:47.646531 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:06:47.647730 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:06:47.653498 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:06:47.654667 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (820) Sep 13 00:06:47.655997 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
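The systemd-fsck summary above ("clean, 14/1628000 files, 120691/1617920 blocks") reports inode and block usage for the ROOT filesystem; worked out, the freshly provisioned root is nearly empty. For example (assuming 4 KiB ext4 blocks):

    # Reading the systemd-fsck summary from the log:
    # "ROOT: clean, 14/1628000 files, 120691/1617920 blocks"
    inodes_used, inodes_total = 14, 1_628_000
    blocks_used, blocks_total = 120_691, 1_617_920   # 4 KiB ext4 blocks (assumed)

    print(f"inodes: {100 * inodes_used / inodes_total:.4f}% used")
    print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")
    print(f"size:   {blocks_total * 4096 / 2**30:.2f} GiB")   # ~6.17 GiB ROOT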
Sep 13 00:06:47.658066 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:06:47.658111 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:06:47.658125 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:06:47.673784 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:06:47.673873 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:06:47.679577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:06:47.709800 coreos-metadata[822]: Sep 13 00:06:47.709 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 13 00:06:47.712471 coreos-metadata[822]: Sep 13 00:06:47.711 INFO Fetch successful Sep 13 00:06:47.712471 coreos-metadata[822]: Sep 13 00:06:47.711 INFO wrote hostname ci-4081-3-5-n-8d584fda4c to /sysroot/etc/hostname Sep 13 00:06:47.715056 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:06:47.722487 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:06:47.726696 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:06:47.730157 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:06:47.733376 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:06:47.805047 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:06:47.810956 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:06:47.814017 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:06:47.822875 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:06:47.843686 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:06:47.844459 ignition[937]: INFO : Ignition 2.19.0 Sep 13 00:06:47.844459 ignition[937]: INFO : Stage: mount Sep 13 00:06:47.846712 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:47.846712 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:06:47.846712 ignition[937]: INFO : mount: mount passed Sep 13 00:06:47.846712 ignition[937]: INFO : Ignition finished successfully Sep 13 00:06:47.847463 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:06:47.852971 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:06:48.031668 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:06:48.037035 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:06:48.047402 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949) Sep 13 00:06:48.047444 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:06:48.050704 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:06:48.050742 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:06:48.059274 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:06:48.059345 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:06:48.062893 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
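flatcar-metadata-hostname.service above asks the Hetzner metadata service for the hostname and writes it to /sysroot/etc/hostname so the real root boots with the right name. A sketch of the visible behaviour only (the actual agent is the coreos-metadata/afterburn binary):

    import urllib.request

    # Sketch of what the "Flatcar Metadata Hostname Agent" lines amount to;
    # endpoint and target path are taken from the log.
    META = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    with urllib.request.urlopen(META, timeout=10) as resp:
        hostname = resp.read().decode().strip()   # e.g. ci-4081-3-5-n-8d584fda4c

    with open("/sysroot/etc/hostname", "w") as f: # initrd view of the root fs
        f.write(hostname + "\n")
    print("wrote hostname", hostname, "to /sysroot/etc/hostname")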
Sep 13 00:06:48.081555 ignition[965]: INFO : Ignition 2.19.0 Sep 13 00:06:48.082895 ignition[965]: INFO : Stage: files Sep 13 00:06:48.082895 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:48.082895 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:06:48.086289 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:06:48.087483 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:06:48.087483 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:06:48.091282 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:06:48.092539 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:06:48.093737 unknown[965]: wrote ssh authorized keys file for user: core Sep 13 00:06:48.094682 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:06:48.096128 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:06:48.097236 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:06:48.332114 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:06:48.600069 systemd-networkd[786]: eth0: Gained IPv6LL Sep 13 00:06:48.664060 systemd-networkd[786]: eth1: Gained IPv6LL Sep 13 00:06:48.886015 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:06:48.886015 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:06:48.888991 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:06:49.333394 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:06:49.477011 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:06:49.477011 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:06:49.480077 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:06:49.480077 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:06:49.480077 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:06:49.480077 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 13 00:06:49.480077 ignition[965]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 13 00:06:49.480077 ignition[965]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 13 00:06:49.480077 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 13 00:06:49.480077 ignition[965]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:06:49.480077 ignition[965]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:06:49.480077 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:06:49.480077 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:06:49.480077 ignition[965]: INFO : files: files passed Sep 13 00:06:49.480077 ignition[965]: INFO : Ignition finished successfully Sep 13 00:06:49.480473 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:06:49.488026 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:06:49.492056 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:06:49.493668 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:06:49.494627 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 13 00:06:49.502754 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:06:49.502754 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:06:49.505075 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:06:49.506761 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:06:49.508124 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:06:49.515014 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:06:49.529662 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:06:49.529735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:06:49.530552 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:06:49.531397 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:06:49.532612 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:06:49.541014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:06:49.549477 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:06:49.555003 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:06:49.561989 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:06:49.563146 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:06:49.564325 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:06:49.564803 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:06:49.564910 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:06:49.565514 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:06:49.566046 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:06:49.567007 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:06:49.568060 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:06:49.569234 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:06:49.570247 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:06:49.571222 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:06:49.572399 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:06:49.573549 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:06:49.574698 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:06:49.575775 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:06:49.575869 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:06:49.577464 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:06:49.578164 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:06:49.579158 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:06:49.579231 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 13 00:06:49.580167 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:06:49.580244 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:06:49.581707 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:06:49.581796 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:06:49.582475 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:06:49.582550 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:06:49.583585 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:06:49.583673 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:06:49.592215 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:06:49.592662 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:06:49.592788 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:06:49.596009 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:06:49.596431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:06:49.596570 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:06:49.597186 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:06:49.597298 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:06:49.603744 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:06:49.603818 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:06:49.613037 ignition[1019]: INFO : Ignition 2.19.0 Sep 13 00:06:49.618822 ignition[1019]: INFO : Stage: umount Sep 13 00:06:49.618822 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:49.618822 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:06:49.618822 ignition[1019]: INFO : umount: umount passed Sep 13 00:06:49.618822 ignition[1019]: INFO : Ignition finished successfully Sep 13 00:06:49.614814 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:06:49.618811 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:06:49.618905 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:06:49.620078 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:06:49.620114 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:06:49.621213 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:06:49.621247 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:06:49.632264 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:06:49.632300 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:06:49.633194 systemd[1]: Stopped target network.target - Network. Sep 13 00:06:49.634336 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:06:49.634388 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:06:49.635658 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:06:49.636684 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:06:49.638894 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 13 00:06:49.639683 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:06:49.642423 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:06:49.643523 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:06:49.643552 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:06:49.644010 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:06:49.644036 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:06:49.645060 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:06:49.645093 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:06:49.646065 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:06:49.646098 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:06:49.647156 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:06:49.648147 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:06:49.650279 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:06:49.650348 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:06:49.651455 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:06:49.651521 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:06:49.651913 systemd-networkd[786]: eth1: DHCPv6 lease lost Sep 13 00:06:49.654249 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:06:49.654307 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:06:49.655353 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:06:49.655387 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:06:49.656915 systemd-networkd[786]: eth0: DHCPv6 lease lost Sep 13 00:06:49.658041 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:06:49.658119 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:06:49.659204 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:06:49.659228 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:06:49.666841 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:06:49.667492 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:06:49.667544 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:06:49.668573 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:06:49.668605 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:06:49.669715 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:06:49.669755 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:06:49.670906 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:06:49.679353 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:06:49.679425 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:06:49.685137 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:06:49.685244 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:06:49.686399 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Sep 13 00:06:49.686442 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:06:49.687284 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:06:49.687308 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:06:49.688248 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:06:49.688282 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:06:49.689646 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:06:49.689680 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:06:49.690672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:06:49.690713 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:06:49.696975 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:06:49.697597 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:06:49.697646 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:06:49.698125 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:06:49.698155 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:49.703078 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:06:49.703149 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:06:49.704284 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:06:49.706118 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:06:49.714132 systemd[1]: Switching root. Sep 13 00:06:49.748573 systemd-journald[187]: Journal stopped Sep 13 00:06:50.512554 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Sep 13 00:06:50.512596 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:06:50.512610 kernel: SELinux: policy capability open_perms=1 Sep 13 00:06:50.512622 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:06:50.512629 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:06:50.512648 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:06:50.512656 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:06:50.512663 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:06:50.512671 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:06:50.512678 kernel: audit: type=1403 audit(1757722009.865:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:06:50.512690 systemd[1]: Successfully loaded SELinux policy in 39.431ms. Sep 13 00:06:50.512703 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.326ms. Sep 13 00:06:50.512714 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:06:50.512722 systemd[1]: Detected virtualization kvm. Sep 13 00:06:50.512730 systemd[1]: Detected architecture x86-64. Sep 13 00:06:50.512737 systemd[1]: Detected first boot. Sep 13 00:06:50.512745 systemd[1]: Hostname set to <ci-4081-3-5-n-8d584fda4c>. Sep 13 00:06:50.512779 systemd[1]: Initializing machine ID from VM UUID.
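"Initializing machine ID from VM UUID" means systemd seeds /etc/machine-id from the hypervisor-provided DMI product UUID rather than generating a random one, keeping the ID stable across reprovisioning of the same VM. A sketch of where that value comes from (standard sysfs path; the real normalization happens inside systemd's machine-id setup):

    # Where "Initializing machine ID from VM UUID" gets its input: the DMI
    # product UUID exposed by the hypervisor. Sketch only; systemd performs
    # the real derivation internally.
    with open("/sys/class/dmi/id/product_uuid") as f:
        vm_uuid = f.read().strip()

    machine_id = vm_uuid.lower().replace("-", "")   # 32 hex chars, /etc/machine-id style
    print(machine_id)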
Sep 13 00:06:50.512791 zram_generator::config[1063]: No configuration found. Sep 13 00:06:50.512805 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:06:50.512813 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:06:50.512821 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:06:50.512828 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:06:50.512839 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:06:50.512860 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:06:50.512870 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:06:50.512886 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:06:50.512896 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:06:50.512907 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:06:50.512915 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:06:50.512923 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:06:50.512952 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:06:50.512962 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:06:50.512971 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:06:50.512979 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:06:50.512987 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:06:50.512995 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:06:50.513005 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:06:50.513013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:06:50.513021 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:06:50.513029 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:06:50.513038 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:06:50.513046 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:06:50.513055 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:06:50.513063 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:06:50.513906 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:06:50.513917 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:06:50.513926 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:06:50.513935 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:06:50.513943 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:06:50.513952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:06:50.513960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:06:50.513971 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Sep 13 00:06:50.513979 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:06:50.513987 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:06:50.513995 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:06:50.514003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:50.514012 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:06:50.514020 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:06:50.514028 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:06:50.514037 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:06:50.514047 systemd[1]: Reached target machines.target - Containers. Sep 13 00:06:50.514058 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:06:50.514068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:50.514076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:06:50.514084 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:06:50.514094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:50.514102 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:06:50.514109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:06:50.514117 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:06:50.514126 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:50.514134 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:06:50.514144 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:06:50.514152 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:06:50.514160 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:06:50.514169 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:06:50.514177 kernel: fuse: init (API version 7.39) Sep 13 00:06:50.514185 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:06:50.514194 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:06:50.514202 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:06:50.514209 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:06:50.514217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:06:50.514225 kernel: loop: module loaded Sep 13 00:06:50.514232 kernel: ACPI: bus type drm_connector registered Sep 13 00:06:50.514241 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:06:50.514249 systemd[1]: Stopped verity-setup.service. Sep 13 00:06:50.514257 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 13 00:06:50.514266 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:06:50.514274 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:06:50.514303 systemd-journald[1139]: Collecting audit messages is disabled. Sep 13 00:06:50.514321 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:06:50.514332 systemd-journald[1139]: Journal started Sep 13 00:06:50.514363 systemd-journald[1139]: Runtime Journal (/run/log/journal/24131e2a6e9a46eab2c50cb67e0bdba3) is 4.8M, max 38.4M, 33.6M free. Sep 13 00:06:50.263526 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:06:50.280558 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 13 00:06:50.281020 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:06:50.518901 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:06:50.519227 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:06:50.519844 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:06:50.520440 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:06:50.521119 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:06:50.521821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:06:50.522544 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:06:50.522655 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:06:50.523383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:50.523524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:50.524288 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:06:50.524477 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:06:50.525321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:50.525581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:50.526339 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:06:50.526456 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:06:50.527386 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:50.527583 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:50.528395 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:06:50.529178 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:06:50.529879 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:06:50.537257 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:06:50.542985 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:06:50.546894 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:06:50.547710 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:06:50.547795 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:06:50.549322 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
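The runtime journal line above ("is 4.8M, max 38.4M, 33.6M free") reflects journald's default sizing policy: cap the runtime journal at 10% of the backing /run filesystem and keep 15% of it free (see journald.conf(5)). Inverting the 10% default suggests the size of /run on this machine:

    # journald sizing from the log: "Runtime Journal ... is 4.8M, max 38.4M".
    # Default RuntimeMaxUse is 10% of the file system (journald.conf(5)),
    # so the cap implies the tmpfs size:
    max_use_mib = 38.4
    run_fs_mib = max_use_mib / 0.10
    print(f"/run tmpfs is about {run_fs_mib:.0f} MiB")   # ~384 MiB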
Sep 13 00:06:50.552949 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:06:50.557504 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:06:50.559059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:50.563944 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:06:50.571006 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:06:50.571874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:06:50.573300 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:06:50.574248 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:06:50.574999 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:06:50.578715 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:06:50.580808 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:06:50.583503 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:06:50.584738 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:06:50.586254 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:06:50.586931 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:06:50.593152 systemd-journald[1139]: Time spent on flushing to /var/log/journal/24131e2a6e9a46eab2c50cb67e0bdba3 is 20.896ms for 1130 entries. Sep 13 00:06:50.593152 systemd-journald[1139]: System Journal (/var/log/journal/24131e2a6e9a46eab2c50cb67e0bdba3) is 8.0M, max 584.8M, 576.8M free. Sep 13 00:06:50.632985 systemd-journald[1139]: Received client request to flush runtime journal. Sep 13 00:06:50.633017 kernel: loop0: detected capacity change from 0 to 8 Sep 13 00:06:50.604042 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:06:50.605111 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:06:50.607148 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:06:50.619183 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:06:50.636191 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:06:50.643874 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:06:50.645812 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:06:50.650711 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:06:50.651718 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:06:50.652926 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:06:50.664678 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:06:50.662916 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Sep 13 00:06:50.668504 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:06:50.692124 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 13 00:06:50.692424 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 13 00:06:50.699572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:06:50.703901 kernel: loop2: detected capacity change from 0 to 142488 Sep 13 00:06:50.747919 kernel: loop3: detected capacity change from 0 to 140768 Sep 13 00:06:50.791880 kernel: loop4: detected capacity change from 0 to 8 Sep 13 00:06:50.795881 kernel: loop5: detected capacity change from 0 to 221472 Sep 13 00:06:50.814986 kernel: loop6: detected capacity change from 0 to 142488 Sep 13 00:06:50.833918 kernel: loop7: detected capacity change from 0 to 140768 Sep 13 00:06:50.851579 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 13 00:06:50.851978 (sd-merge)[1209]: Merged extensions into '/usr'. Sep 13 00:06:50.858462 systemd[1]: Reloading requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:06:50.858544 systemd[1]: Reloading... Sep 13 00:06:50.923882 zram_generator::config[1235]: No configuration found. Sep 13 00:06:51.017188 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:51.055391 systemd[1]: Reloading finished in 196 ms. Sep 13 00:06:51.057865 ldconfig[1178]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:06:51.078391 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:06:51.079117 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:06:51.086026 systemd[1]: Starting ensure-sysext.service... Sep 13 00:06:51.090520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:06:51.096934 systemd[1]: Reloading requested from client PID 1278 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:06:51.096945 systemd[1]: Reloading... Sep 13 00:06:51.118921 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:06:51.119475 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:06:51.120102 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:06:51.120307 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Sep 13 00:06:51.120349 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Sep 13 00:06:51.123307 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:06:51.123374 systemd-tmpfiles[1279]: Skipping /boot Sep 13 00:06:51.131295 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:06:51.131376 systemd-tmpfiles[1279]: Skipping /boot Sep 13 00:06:51.169885 zram_generator::config[1301]: No configuration found. 
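The loop0-loop7 capacity changes and the (sd-merge) lines below show systemd-sysext attaching each extension image (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) to a loop device and overlaying them onto /usr. A sketch of the discovery half, assuming the standard search directories from systemd-sysext(8):

    import os

    # Sketch of how systemd-sysext discovers extension images before
    # merging them over /usr with an overlay mount; directories per
    # systemd-sysext(8). Illustrative only.
    SEARCH = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    found = []
    for d in SEARCH:
        if os.path.isdir(d):
            found += [os.path.join(d, n) for n in sorted(os.listdir(d))
                      if n.endswith(".raw") or os.path.isdir(os.path.join(d, n))]
    print("Using extensions:", ", ".join(found))
    # The log's "Merged extensions into '/usr'" is the overlay step: each
    # image's /usr tree becomes a lowerdir of the combined mount.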
Sep 13 00:06:51.256398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:51.293706 systemd[1]: Reloading finished in 196 ms. Sep 13 00:06:51.309300 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:06:51.315169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:06:51.320969 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:06:51.324013 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:06:51.327079 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:06:51.332613 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:06:51.339006 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:06:51.342164 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:06:51.356705 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:06:51.360787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:51.361026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:51.364275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:51.369024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:06:51.372040 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:51.372554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:51.372632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:51.373632 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:06:51.383070 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:06:51.387160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:51.387316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:51.387471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:51.387574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:51.391486 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:51.393068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:51.396101 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 13 00:06:51.397040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:51.397147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:51.398280 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:06:51.399752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:51.400110 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:51.409299 systemd[1]: Finished ensure-sysext.service. Sep 13 00:06:51.412321 systemd-udevd[1356]: Using default interface naming scheme 'v255'. Sep 13 00:06:51.418356 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:06:51.424377 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:06:51.434704 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:06:51.435078 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:06:51.436057 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:06:51.445118 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:51.445270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:51.447743 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:06:51.452930 augenrules[1389]: No rules Sep 13 00:06:51.454032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:51.454314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:51.456315 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:06:51.460890 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:06:51.472044 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:06:51.472917 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:06:51.473193 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:06:51.475753 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:06:51.536248 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:06:51.538008 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:06:51.551463 systemd-resolved[1354]: Positive Trust Anchors: Sep 13 00:06:51.551740 systemd-resolved[1354]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:06:51.551807 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:06:51.563789 systemd-resolved[1354]: Using system hostname 'ci-4081-3-5-n-8d584fda4c'. Sep 13 00:06:51.563985 systemd-networkd[1400]: lo: Link UP Sep 13 00:06:51.564184 systemd-networkd[1400]: lo: Gained carrier Sep 13 00:06:51.564729 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 00:06:51.566472 systemd-networkd[1400]: Enumeration completed Sep 13 00:06:51.566628 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:06:51.573391 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:06:51.574549 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:06:51.575605 systemd[1]: Reached target network.target - Network. Sep 13 00:06:51.576395 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:06:51.610451 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:51.610616 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:06:51.611217 systemd-networkd[1400]: eth0: Link UP Sep 13 00:06:51.611300 systemd-networkd[1400]: eth0: Gained carrier Sep 13 00:06:51.611372 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:51.616877 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1399) Sep 13 00:06:51.627998 systemd-networkd[1400]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:51.628011 systemd-networkd[1400]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:06:51.629941 systemd-networkd[1400]: eth1: Link UP Sep 13 00:06:51.629953 systemd-networkd[1400]: eth1: Gained carrier Sep 13 00:06:51.629970 systemd-networkd[1400]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:51.648961 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 00:06:51.655865 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:06:51.660954 systemd-networkd[1400]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:06:51.662213 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Sep 13 00:06:51.672958 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
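The trust-anchor dump and hostname lookup above come from systemd-resolved; its runtime view can be checked from a shell. A sketch (the queried name is just an example):

    resolvectl status                # per-link DNS servers, domains, DNSSEC state
    resolvectl query hetzner.com     # resolve through the local stub resolver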
Sep 13 00:06:51.673880 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:06:51.677147 systemd-networkd[1400]: eth0: DHCPv4 address 65.108.146.26/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:06:51.677785 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Sep 13 00:06:51.682922 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Sep 13 00:06:51.684032 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:06:51.699868 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Sep 13 00:06:51.702020 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Sep 13 00:06:51.704198 kernel: Console: switching to colour dummy device 80x25 Sep 13 00:06:51.705017 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 13 00:06:51.705052 kernel: [drm] features: -context_init Sep 13 00:06:51.706925 kernel: [drm] number of scanouts: 1 Sep 13 00:06:51.706950 kernel: [drm] number of cap sets: 0 Sep 13 00:06:51.709883 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 13 00:06:51.710551 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:06:51.713290 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 13 00:06:51.713340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:51.713419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:51.728084 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:06:51.728133 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 13 00:06:51.728151 kernel: Console: switching to colour frame buffer device 160x50 Sep 13 00:06:51.731904 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:06:51.732085 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:06:51.732206 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:06:51.730455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:51.739879 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:06:51.760068 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 13 00:06:51.769023 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:06:51.776658 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:51.776842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:51.776970 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:06:51.776982 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:51.777275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:51.777381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
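Both NICs above were matched by the catch-all zz-default.network, hence networkd's "potentially unpredictable interface name" notices. A sketch of a unit that matches on something stable instead (the MAC address is hypothetical):

    sudo tee /etc/systemd/network/10-uplink.network <<'EOF' >/dev/null
    [Match]
    MACAddress=96:00:00:00:00:01

    [Network]
    DHCP=yes
    EOF
    sudo systemctl restart systemd-networkd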
Sep 13 00:06:51.785918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:51.786067 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:51.791423 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:51.791664 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:51.802182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:06:51.802281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:06:51.809048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:06:51.812689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:06:51.812820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:51.821988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:06:51.866377 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:51.931818 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:06:51.937064 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:06:51.947349 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:06:51.974496 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:06:51.976196 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:06:51.976300 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:06:51.976456 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:06:51.976546 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:06:51.976801 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:06:51.976955 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:06:51.977467 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:06:51.978262 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:06:51.978339 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:06:51.978840 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:06:51.983939 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:06:51.986659 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:06:51.995751 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:06:51.998185 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:06:52.000591 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:06:52.001135 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:06:52.001555 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:06:52.005019 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
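The timer and socket units that just came up can be confirmed at runtime; a quick sketch:

    systemctl list-timers     # logrotate.timer, systemd-tmpfiles-clean.timer, mdadm.timer, ...
    systemctl list-sockets    # docker.socket, sshd.socket, dbus.socket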
Sep 13 00:06:52.005048 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:06:52.008995 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:06:52.015023 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:06:52.020905 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:06:52.019004 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:06:52.024953 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:06:52.028687 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:06:52.029213 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:06:52.041999 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:06:52.045157 coreos-metadata[1464]: Sep 13 00:06:52.044 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 13 00:06:52.045505 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:06:52.053405 coreos-metadata[1464]: Sep 13 00:06:52.050 INFO Fetch successful Sep 13 00:06:52.053405 coreos-metadata[1464]: Sep 13 00:06:52.050 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 13 00:06:52.049979 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 13 00:06:52.053805 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:06:52.062021 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:06:52.066359 coreos-metadata[1464]: Sep 13 00:06:52.065 INFO Fetch successful Sep 13 00:06:52.066768 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:06:52.069260 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:06:52.069580 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:06:52.072594 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:06:52.075480 jq[1468]: false Sep 13 00:06:52.079995 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:06:52.081007 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:06:52.087130 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:06:52.087603 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:06:52.101285 dbus-daemon[1467]: [system] SELinux support is enabled Sep 13 00:06:52.101603 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:06:52.114125 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:06:52.114258 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:06:52.117156 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
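coreos-metadata above reads the Hetzner link-local endpoint; the same documents can be fetched by hand from inside the VM:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks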
Sep 13 00:06:52.117188 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:06:52.117593 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:06:52.117608 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:06:52.127996 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:06:52.129241 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:06:52.132558 jq[1481]: true Sep 13 00:06:52.140167 extend-filesystems[1469]: Found loop4 Sep 13 00:06:52.140167 extend-filesystems[1469]: Found loop5 Sep 13 00:06:52.140167 extend-filesystems[1469]: Found loop6 Sep 13 00:06:52.140167 extend-filesystems[1469]: Found loop7 Sep 13 00:06:52.140167 extend-filesystems[1469]: Found sda Sep 13 00:06:52.140167 extend-filesystems[1469]: Found sda1 Sep 13 00:06:52.140167 extend-filesystems[1469]: Found sda2 Sep 13 00:06:52.140167 extend-filesystems[1469]: Found sda3 Sep 13 00:06:52.140167 extend-filesystems[1469]: Found usr Sep 13 00:06:52.140167 extend-filesystems[1469]: Found sda4 Sep 13 00:06:52.140167 extend-filesystems[1469]: Found sda6 Sep 13 00:06:52.140167 extend-filesystems[1469]: Found sda7 Sep 13 00:06:52.140167 extend-filesystems[1469]: Found sda9 Sep 13 00:06:52.140167 extend-filesystems[1469]: Checking size of /dev/sda9 Sep 13 00:06:52.190209 tar[1486]: linux-amd64/helm Sep 13 00:06:52.190375 update_engine[1479]: I20250913 00:06:52.164880 1479 main.cc:92] Flatcar Update Engine starting Sep 13 00:06:52.190375 update_engine[1479]: I20250913 00:06:52.168583 1479 update_check_scheduler.cc:74] Next update check in 6m11s Sep 13 00:06:52.162163 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:06:52.168537 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:06:52.190762 jq[1503]: true Sep 13 00:06:52.173998 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:06:52.201241 extend-filesystems[1469]: Resized partition /dev/sda9 Sep 13 00:06:52.202994 extend-filesystems[1515]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:06:52.216592 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 13 00:06:52.236714 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:06:52.238213 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:06:52.277402 systemd-logind[1477]: New seat seat0. Sep 13 00:06:52.281560 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1413) Sep 13 00:06:52.293559 systemd-logind[1477]: Watching system buttons on /dev/input/event2 (Power Button) Sep 13 00:06:52.294662 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:06:52.295006 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:06:52.329772 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:06:52.332841 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:06:52.343157 systemd[1]: Starting sshkeys.service... 
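extend-filesystems is checking the size of /dev/sda9 here; the on-line grow it performs a moment later (via the resize2fs it logs) amounts to:

    # Grow the mounted ext4 root to fill its partition; safe to do on-line for ext4.
    sudo resize2fs /dev/sda9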
Sep 13 00:06:52.387945 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:06:52.403095 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:06:52.421339 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 13 00:06:52.427946 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:06:52.439966 coreos-metadata[1545]: Sep 13 00:06:52.432 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 13 00:06:52.439966 coreos-metadata[1545]: Sep 13 00:06:52.435 INFO Fetch successful Sep 13 00:06:52.442042 extend-filesystems[1515]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 13 00:06:52.442042 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 13 00:06:52.442042 extend-filesystems[1515]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 13 00:06:52.451188 extend-filesystems[1469]: Resized filesystem in /dev/sda9 Sep 13 00:06:52.451188 extend-filesystems[1469]: Found sr0 Sep 13 00:06:52.446048 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:06:52.446182 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:06:52.447748 unknown[1545]: wrote ssh authorized keys file for user: core Sep 13 00:06:52.491695 update-ssh-keys[1553]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:06:52.484430 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:06:52.491919 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:06:52.490448 systemd[1]: Finished sshkeys.service. Sep 13 00:06:52.507030 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:06:52.519135 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:06:52.531564 containerd[1508]: time="2025-09-13T00:06:52.531068792Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:06:52.533175 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:06:52.533319 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:06:52.541149 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:06:52.569813 containerd[1508]: time="2025-09-13T00:06:52.569768308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:52.572845 containerd[1508]: time="2025-09-13T00:06:52.572815251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:52.573432 containerd[1508]: time="2025-09-13T00:06:52.572938893Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:06:52.573432 containerd[1508]: time="2025-09-13T00:06:52.572959802Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:06:52.573432 containerd[1508]: time="2025-09-13T00:06:52.573101968Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 13 00:06:52.573432 containerd[1508]: time="2025-09-13T00:06:52.573128729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:52.573432 containerd[1508]: time="2025-09-13T00:06:52.573218006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:52.573432 containerd[1508]: time="2025-09-13T00:06:52.573232674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:52.573700 containerd[1508]: time="2025-09-13T00:06:52.573681425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:52.573782 containerd[1508]: time="2025-09-13T00:06:52.573760082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:52.573827 containerd[1508]: time="2025-09-13T00:06:52.573816699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:52.574178 containerd[1508]: time="2025-09-13T00:06:52.574162407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:52.574321 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:06:52.577837 containerd[1508]: time="2025-09-13T00:06:52.574624273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:52.578349 containerd[1508]: time="2025-09-13T00:06:52.578164571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:52.578498 containerd[1508]: time="2025-09-13T00:06:52.578480894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:52.578556 containerd[1508]: time="2025-09-13T00:06:52.578544354Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:06:52.579589 containerd[1508]: time="2025-09-13T00:06:52.578678174Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:06:52.579714 containerd[1508]: time="2025-09-13T00:06:52.579698327Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:06:52.584749 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:06:52.587182 containerd[1508]: time="2025-09-13T00:06:52.587156412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587298018Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587319678Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587348482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587362879Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587487102Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587684232Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587767208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587782116Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587793457Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587804978Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587815819Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587824975Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:06:52.587870 containerd[1508]: time="2025-09-13T00:06:52.587836026Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:06:52.588591 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:06:52.589476 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590102757Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590147801Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590162078Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590175984Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590198527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590212021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590223483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590234694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590244342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590254802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590263889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590274769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590284938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590715 containerd[1508]: time="2025-09-13T00:06:52.590297281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590307050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590316738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590326977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590339100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590369907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590380447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590388743Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590420463Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590435932Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590444968Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590453554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590460397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590469234Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:06:52.590934 containerd[1508]: time="2025-09-13T00:06:52.590478181Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:06:52.591128 containerd[1508]: time="2025-09-13T00:06:52.590485595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:06:52.591155 containerd[1508]: time="2025-09-13T00:06:52.590729121Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:06:52.591155 containerd[1508]: time="2025-09-13T00:06:52.590782691Z" level=info msg="Connect containerd service" Sep 13 00:06:52.591155 containerd[1508]: time="2025-09-13T00:06:52.590819070Z" level=info msg="using legacy CRI server" Sep 13 00:06:52.591155 containerd[1508]: time="2025-09-13T00:06:52.590825061Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:06:52.591155 containerd[1508]: 
time="2025-09-13T00:06:52.590985862Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:06:52.594688 containerd[1508]: time="2025-09-13T00:06:52.591425066Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:06:52.594688 containerd[1508]: time="2025-09-13T00:06:52.591692237Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:06:52.594688 containerd[1508]: time="2025-09-13T00:06:52.591726932Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:06:52.594688 containerd[1508]: time="2025-09-13T00:06:52.591798807Z" level=info msg="Start subscribing containerd event" Sep 13 00:06:52.594688 containerd[1508]: time="2025-09-13T00:06:52.591827461Z" level=info msg="Start recovering state" Sep 13 00:06:52.594688 containerd[1508]: time="2025-09-13T00:06:52.591902682Z" level=info msg="Start event monitor" Sep 13 00:06:52.594688 containerd[1508]: time="2025-09-13T00:06:52.591915285Z" level=info msg="Start snapshots syncer" Sep 13 00:06:52.594688 containerd[1508]: time="2025-09-13T00:06:52.591922218Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:06:52.594688 containerd[1508]: time="2025-09-13T00:06:52.591929241Z" level=info msg="Start streaming server" Sep 13 00:06:52.594688 containerd[1508]: time="2025-09-13T00:06:52.591975558Z" level=info msg="containerd successfully booted in 0.061689s" Sep 13 00:06:52.592195 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:06:52.759981 systemd-networkd[1400]: eth0: Gained IPv6LL Sep 13 00:06:52.760459 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Sep 13 00:06:52.764618 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:06:52.765397 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:06:52.775987 tar[1486]: linux-amd64/LICENSE Sep 13 00:06:52.776053 tar[1486]: linux-amd64/README.md Sep 13 00:06:52.779050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:52.781062 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:06:52.792234 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:06:52.810404 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:06:53.336063 systemd-networkd[1400]: eth1: Gained IPv6LL Sep 13 00:06:53.336526 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Sep 13 00:06:53.702587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:53.707291 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:06:53.710023 systemd[1]: Startup finished in 1.223s (kernel) + 5.178s (initrd) + 3.883s (userspace) = 10.285s. 
Sep 13 00:06:53.715238 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:06:54.255049 kubelet[1596]: E0913 00:06:54.254951 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:06:54.257177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:06:54.257330 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:07:01.747309 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:07:01.749022 systemd[1]: Started sshd@0-65.108.146.26:22-147.75.109.163:57592.service - OpenSSH per-connection server daemon (147.75.109.163:57592). Sep 13 00:07:02.834264 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 57592 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:07:02.836246 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:02.845358 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:07:02.857455 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:07:02.860448 systemd-logind[1477]: New session 1 of user core. Sep 13 00:07:02.870222 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:07:02.882164 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:07:02.885416 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:02.982590 systemd[1612]: Queued start job for default target default.target. Sep 13 00:07:02.987765 systemd[1612]: Created slice app.slice - User Application Slice. Sep 13 00:07:02.987793 systemd[1612]: Reached target paths.target - Paths. Sep 13 00:07:02.987805 systemd[1612]: Reached target timers.target - Timers. Sep 13 00:07:02.988971 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:07:03.000842 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:07:03.000971 systemd[1612]: Reached target sockets.target - Sockets. Sep 13 00:07:03.000985 systemd[1612]: Reached target basic.target - Basic System. Sep 13 00:07:03.001019 systemd[1612]: Reached target default.target - Main User Target. Sep 13 00:07:03.001043 systemd[1612]: Startup finished in 109ms. Sep 13 00:07:03.001409 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:07:03.017071 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:07:03.769770 systemd[1]: Started sshd@1-65.108.146.26:22-147.75.109.163:57598.service - OpenSSH per-connection server daemon (147.75.109.163:57598). Sep 13 00:07:04.394323 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:07:04.406049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:04.488001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
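The kubelet crash-loop in this span is expected on a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join. For illustration only, a minimal stand-in in the expected format (values hypothetical; kubeadm generates the real file):

    sudo tee /var/lib/kubelet/config.yaml <<'EOF' >/dev/null
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF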
Sep 13 00:07:04.490670 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:07:04.523285 kubelet[1633]: E0913 00:07:04.523244 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:07:04.526682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:07:04.526880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:07:04.838457 sshd[1623]: Accepted publickey for core from 147.75.109.163 port 57598 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:07:04.839471 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:04.843803 systemd-logind[1477]: New session 2 of user core. Sep 13 00:07:04.857034 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:07:05.577642 sshd[1623]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:05.581664 systemd[1]: sshd@1-65.108.146.26:22-147.75.109.163:57598.service: Deactivated successfully. Sep 13 00:07:05.584173 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:07:05.586011 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:07:05.587603 systemd-logind[1477]: Removed session 2. Sep 13 00:07:05.767272 systemd[1]: Started sshd@2-65.108.146.26:22-147.75.109.163:57612.service - OpenSSH per-connection server daemon (147.75.109.163:57612). Sep 13 00:07:06.851974 sshd[1646]: Accepted publickey for core from 147.75.109.163 port 57612 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:07:06.853225 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:06.857287 systemd-logind[1477]: New session 3 of user core. Sep 13 00:07:06.861984 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:07:07.590897 sshd[1646]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:07.593381 systemd[1]: sshd@2-65.108.146.26:22-147.75.109.163:57612.service: Deactivated successfully. Sep 13 00:07:07.595179 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:07:07.595973 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:07:07.597061 systemd-logind[1477]: Removed session 3. Sep 13 00:07:07.739660 systemd[1]: Started sshd@3-65.108.146.26:22-147.75.109.163:57616.service - OpenSSH per-connection server daemon (147.75.109.163:57616). Sep 13 00:07:08.707981 sshd[1653]: Accepted publickey for core from 147.75.109.163 port 57616 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:07:08.709421 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:08.714899 systemd-logind[1477]: New session 4 of user core. Sep 13 00:07:08.721066 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:07:09.382269 sshd[1653]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:09.384923 systemd[1]: sshd@3-65.108.146.26:22-147.75.109.163:57616.service: Deactivated successfully. Sep 13 00:07:09.386510 systemd[1]: session-4.scope: Deactivated successfully. 
Sep 13 00:07:09.387502 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:07:09.388512 systemd-logind[1477]: Removed session 4. Sep 13 00:07:09.549945 systemd[1]: Started sshd@4-65.108.146.26:22-147.75.109.163:45568.service - OpenSSH per-connection server daemon (147.75.109.163:45568). Sep 13 00:07:10.523122 sshd[1660]: Accepted publickey for core from 147.75.109.163 port 45568 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:07:10.524561 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:10.530114 systemd-logind[1477]: New session 5 of user core. Sep 13 00:07:10.536040 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:07:11.051141 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:07:11.051519 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:07:11.066436 sudo[1663]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:11.225056 sshd[1660]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:11.228931 systemd[1]: sshd@4-65.108.146.26:22-147.75.109.163:45568.service: Deactivated successfully. Sep 13 00:07:11.230522 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:07:11.231204 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:07:11.232353 systemd-logind[1477]: Removed session 5. Sep 13 00:07:11.428454 systemd[1]: Started sshd@5-65.108.146.26:22-147.75.109.163:45574.service - OpenSSH per-connection server daemon (147.75.109.163:45574). Sep 13 00:07:12.501186 sshd[1668]: Accepted publickey for core from 147.75.109.163 port 45574 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:07:12.502389 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:12.506811 systemd-logind[1477]: New session 6 of user core. Sep 13 00:07:12.512045 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:07:13.070121 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:07:13.070392 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:07:13.073832 sudo[1672]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:13.078970 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:07:13.079260 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:07:13.091105 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:07:13.094931 auditctl[1675]: No rules Sep 13 00:07:13.095425 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:07:13.095629 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:07:13.097839 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:07:13.123192 augenrules[1693]: No rules Sep 13 00:07:13.124067 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:07:13.125141 sudo[1671]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:13.299934 sshd[1668]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:13.302655 systemd[1]: sshd@5-65.108.146.26:22-147.75.109.163:45574.service: Deactivated successfully. 
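auditctl and augenrules both report "No rules" during the audit-rules restart above. Persistent rules live under /etc/audit/rules.d and are compiled by augenrules; a sketch with a hypothetical watch rule:

    sudo tee /etc/audit/rules.d/10-example.rules <<'EOF' >/dev/null
    # Example: record writes and attribute changes under /etc/ssh/
    -w /etc/ssh/ -p wa -k sshd_config
    EOF
    sudo augenrules --load   # rebuild and load /etc/audit/audit.rules
    sudo auditctl -l         # list the rules now in effect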
Sep 13 00:07:13.304642 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:07:13.306020 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:07:13.307072 systemd-logind[1477]: Removed session 6. Sep 13 00:07:13.447530 systemd[1]: Started sshd@6-65.108.146.26:22-147.75.109.163:45582.service - OpenSSH per-connection server daemon (147.75.109.163:45582). Sep 13 00:07:14.413326 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 45582 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:07:14.414809 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:14.420060 systemd-logind[1477]: New session 7 of user core. Sep 13 00:07:14.429014 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:07:14.644503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:07:14.650014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:14.750231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:14.754174 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:07:14.788490 kubelet[1712]: E0913 00:07:14.788423 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:07:14.790842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:07:14.791005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:07:14.928000 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:07:14.928261 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:07:15.177219 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:07:15.177320 (dockerd)[1736]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:07:15.421588 dockerd[1736]: time="2025-09-13T00:07:15.421518806Z" level=info msg="Starting up" Sep 13 00:07:15.510810 dockerd[1736]: time="2025-09-13T00:07:15.510431958Z" level=info msg="Loading containers: start." Sep 13 00:07:15.600888 kernel: Initializing XFRM netlink socket Sep 13 00:07:15.623633 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Sep 13 00:07:15.674772 systemd-networkd[1400]: docker0: Link UP Sep 13 00:07:15.689026 dockerd[1736]: time="2025-09-13T00:07:15.688982188Z" level=info msg="Loading containers: done." 
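dockerd is initializing here and reports "API listen on /run/docker.sock" just below; once it does, a quick health check looks like:

    docker version                       # client and daemon over /run/docker.sock
    docker info --format '{{.Driver}}'   # prints overlay2, matching the log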
Sep 13 00:07:15.702931 dockerd[1736]: time="2025-09-13T00:07:15.702888754Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:07:15.703053 dockerd[1736]: time="2025-09-13T00:07:15.702976549Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:07:15.703080 dockerd[1736]: time="2025-09-13T00:07:15.703059063Z" level=info msg="Daemon has completed initialization" Sep 13 00:07:15.728325 dockerd[1736]: time="2025-09-13T00:07:15.728272481Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:07:15.728632 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:07:16.620248 systemd-resolved[1354]: Clock change detected. Flushing caches. Sep 13 00:07:16.620536 systemd-timesyncd[1377]: Contacted time server 78.47.56.71:123 (2.flatcar.pool.ntp.org). Sep 13 00:07:16.620581 systemd-timesyncd[1377]: Initial clock synchronization to Sat 2025-09-13 00:07:16.619842 UTC. Sep 13 00:07:17.594221 containerd[1508]: time="2025-09-13T00:07:17.594169670Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:07:18.116364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911746532.mount: Deactivated successfully. Sep 13 00:07:19.094585 containerd[1508]: time="2025-09-13T00:07:19.094525921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:19.095621 containerd[1508]: time="2025-09-13T00:07:19.095578034Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117224" Sep 13 00:07:19.096461 containerd[1508]: time="2025-09-13T00:07:19.096414733Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:19.099172 containerd[1508]: time="2025-09-13T00:07:19.099113172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:19.100899 containerd[1508]: time="2025-09-13T00:07:19.100441623Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.506230075s" Sep 13 00:07:19.100899 containerd[1508]: time="2025-09-13T00:07:19.100514460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:07:19.101646 containerd[1508]: time="2025-09-13T00:07:19.101466856Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:07:20.330422 containerd[1508]: time="2025-09-13T00:07:20.330363102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:20.331628 containerd[1508]: 
time="2025-09-13T00:07:20.331417098Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716654" Sep 13 00:07:20.333914 containerd[1508]: time="2025-09-13T00:07:20.332479810Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:20.335164 containerd[1508]: time="2025-09-13T00:07:20.334969269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:20.336156 containerd[1508]: time="2025-09-13T00:07:20.336104036Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.234610801s" Sep 13 00:07:20.336219 containerd[1508]: time="2025-09-13T00:07:20.336161514Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:07:20.336958 containerd[1508]: time="2025-09-13T00:07:20.336924865Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:07:21.342269 containerd[1508]: time="2025-09-13T00:07:21.342198937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:21.343286 containerd[1508]: time="2025-09-13T00:07:21.343154809Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787720" Sep 13 00:07:21.344161 containerd[1508]: time="2025-09-13T00:07:21.343961401Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:21.347611 containerd[1508]: time="2025-09-13T00:07:21.346663649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:21.347611 containerd[1508]: time="2025-09-13T00:07:21.347462306Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.01051083s" Sep 13 00:07:21.347611 containerd[1508]: time="2025-09-13T00:07:21.347486822Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:07:21.348081 containerd[1508]: time="2025-09-13T00:07:21.347971361Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:07:22.284229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1092965593.mount: Deactivated successfully. 
Sep 13 00:07:22.534812 containerd[1508]: time="2025-09-13T00:07:22.534751013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:22.536645 containerd[1508]: time="2025-09-13T00:07:22.536544496Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410280"
Sep 13 00:07:22.539062 containerd[1508]: time="2025-09-13T00:07:22.538207164Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:22.541251 containerd[1508]: time="2025-09-13T00:07:22.540610110Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.192384873s"
Sep 13 00:07:22.541251 containerd[1508]: time="2025-09-13T00:07:22.540638553Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 13 00:07:22.541251 containerd[1508]: time="2025-09-13T00:07:22.540760392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:22.541338 containerd[1508]: time="2025-09-13T00:07:22.541299242Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:07:23.014845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205341591.mount: Deactivated successfully.
Sep 13 00:07:23.393428 systemd[1]: Started sshd@7-65.108.146.26:22-210.231.185.88:27672.service - OpenSSH per-connection server daemon (210.231.185.88:27672).
Sep 13 00:07:23.727958 containerd[1508]: time="2025-09-13T00:07:23.727899658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:23.729301 containerd[1508]: time="2025-09-13T00:07:23.729246294Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
Sep 13 00:07:23.729651 containerd[1508]: time="2025-09-13T00:07:23.729622168Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:23.732548 containerd[1508]: time="2025-09-13T00:07:23.732511566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:23.734978 containerd[1508]: time="2025-09-13T00:07:23.734310099Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.192986281s"
Sep 13 00:07:23.734978 containerd[1508]: time="2025-09-13T00:07:23.734342539Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:07:23.735244 containerd[1508]: time="2025-09-13T00:07:23.735205347Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:07:24.158327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568026671.mount: Deactivated successfully.
Sep 13 00:07:24.164867 containerd[1508]: time="2025-09-13T00:07:24.164793870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:24.165530 containerd[1508]: time="2025-09-13T00:07:24.165501597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Sep 13 00:07:24.167616 containerd[1508]: time="2025-09-13T00:07:24.166254068Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:24.169325 containerd[1508]: time="2025-09-13T00:07:24.168510749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:24.169325 containerd[1508]: time="2025-09-13T00:07:24.169160217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 433.925114ms"
Sep 13 00:07:24.169325 containerd[1508]: time="2025-09-13T00:07:24.169201935Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:07:24.169994 containerd[1508]: time="2025-09-13T00:07:24.169966739Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:07:24.646525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167730452.mount: Deactivated successfully.
Sep 13 00:07:24.870652 sshd[1999]: Invalid user mani from 210.231.185.88 port 27672
Sep 13 00:07:25.152656 sshd[1999]: Received disconnect from 210.231.185.88 port 27672:11: Bye Bye [preauth]
Sep 13 00:07:25.152656 sshd[1999]: Disconnected from invalid user mani 210.231.185.88 port 27672 [preauth]
Sep 13 00:07:25.154424 systemd[1]: sshd@7-65.108.146.26:22-210.231.185.88:27672.service: Deactivated successfully.
Sep 13 00:07:25.690628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 00:07:25.695384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:07:25.808966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:25.813023 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:07:25.876377 kubelet[2071]: E0913 00:07:25.876337 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:07:25.878043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:07:25.878272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
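The restart loop above ends the same way each time: the kubelet exits with status 1 because the file passed via --config is missing (on a kubeadm-provisioned node it is normally written by "kubeadm init" or "kubeadm join"), and systemd's Restart= policy schedules another attempt. A hedged Go sketch of the failing step, using the path from the log; this is illustrative only, not the kubelet's actual config loader.

package main

import (
	"fmt"
	"os"
)

// loadKubeletConfig mimics the shape of the logged error: reading the file
// given by --config and wrapping the underlying os error.
func loadKubeletConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to load Kubelet config file %q, error: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, "command failed:", err)
		os.Exit(1) // matches status=1/FAILURE in the unit result above
	}
}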
Sep 13 00:07:25.930413 containerd[1508]: time="2025-09-13T00:07:25.929407360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:25.930413 containerd[1508]: time="2025-09-13T00:07:25.930363853Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910785"
Sep 13 00:07:25.930967 containerd[1508]: time="2025-09-13T00:07:25.930921248Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:25.933413 containerd[1508]: time="2025-09-13T00:07:25.933379397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:25.934709 containerd[1508]: time="2025-09-13T00:07:25.934317236Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.764319099s"
Sep 13 00:07:25.934709 containerd[1508]: time="2025-09-13T00:07:25.934358413Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 13 00:07:28.275998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:28.283579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:07:28.318751 systemd[1]: Reloading requested from client PID 2107 ('systemctl') (unit session-7.scope)...
Sep 13 00:07:28.318770 systemd[1]: Reloading...
Sep 13 00:07:28.431231 zram_generator::config[2150]: No configuration found.
Sep 13 00:07:28.520248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:07:28.589241 systemd[1]: Reloading finished in 270 ms.
Sep 13 00:07:28.633014 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:07:28.635069 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:07:28.635353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:28.639400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:07:28.740525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:28.748417 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:07:28.791327 kubelet[2203]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:07:28.791653 kubelet[2203]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:07:28.791703 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:07:28.791840 kubelet[2203]: I0913 00:07:28.791812 2203 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:07:29.259470 kubelet[2203]: I0913 00:07:29.259436 2203 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:07:29.259637 kubelet[2203]: I0913 00:07:29.259624 2203 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:07:29.260511 kubelet[2203]: I0913 00:07:29.260493 2203 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:07:29.284788 kubelet[2203]: I0913 00:07:29.284722 2203 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:07:29.288124 kubelet[2203]: E0913 00:07:29.288047 2203 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://65.108.146.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 65.108.146.26:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:29.297719 kubelet[2203]: E0913 00:07:29.297667 2203 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:07:29.297719 kubelet[2203]: I0913 00:07:29.297711 2203 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:07:29.305773 kubelet[2203]: I0913 00:07:29.305729 2203 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:07:29.305863 kubelet[2203]: I0913 00:07:29.305849 2203 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:07:29.306031 kubelet[2203]: I0913 00:07:29.305983 2203 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:07:29.306301 kubelet[2203]: I0913 00:07:29.306023 2203 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-8d584fda4c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:07:29.306459 kubelet[2203]: I0913 00:07:29.306314 2203 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:07:29.306459 kubelet[2203]: I0913 00:07:29.306329 2203 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:07:29.306504 kubelet[2203]: I0913 00:07:29.306462 2203 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:07:29.310583 kubelet[2203]: I0913 00:07:29.309919 2203 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:07:29.310583 kubelet[2203]: I0913 00:07:29.309950 2203 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:07:29.310583 kubelet[2203]: I0913 00:07:29.309987 2203 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:07:29.310583 kubelet[2203]: I0913 00:07:29.310007 2203 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:07:29.317244 kubelet[2203]: W0913 00:07:29.317195 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.108.146.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-8d584fda4c&limit=500&resourceVersion=0": dial tcp 65.108.146.26:6443: connect: connection refused
Sep 13 00:07:29.317364 kubelet[2203]: E0913 00:07:29.317347 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.108.146.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-8d584fda4c&limit=500&resourceVersion=0\": dial tcp 65.108.146.26:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:29.317518 kubelet[2203]: I0913 00:07:29.317505 2203 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:07:29.320571 kubelet[2203]: I0913 00:07:29.320554 2203 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:07:29.320691 kubelet[2203]: W0913 00:07:29.320680 2203 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:07:29.323315 kubelet[2203]: W0913 00:07:29.322904 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.108.146.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.108.146.26:6443: connect: connection refused
Sep 13 00:07:29.323315 kubelet[2203]: E0913 00:07:29.322987 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.108.146.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.108.146.26:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:29.323412 kubelet[2203]: I0913 00:07:29.323389 2203 server.go:1274] "Started kubelet"
Sep 13 00:07:29.323815 kubelet[2203]: I0913 00:07:29.323757 2203 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:07:29.331145 kubelet[2203]: I0913 00:07:29.331093 2203 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:07:29.332286 kubelet[2203]: I0913 00:07:29.331527 2203 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:07:29.332837 kubelet[2203]: I0913 00:07:29.332804 2203 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:07:29.334890 kubelet[2203]: I0913 00:07:29.334853 2203 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:07:29.339780 kubelet[2203]: E0913 00:07:29.338348 2203 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.108.146.26:6443/api/v1/namespaces/default/events\": dial tcp 65.108.146.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-8d584fda4c.1864aee41bca5013 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-8d584fda4c,UID:ci-4081-3-5-n-8d584fda4c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-8d584fda4c,},FirstTimestamp:2025-09-13 00:07:29.323356179 +0000 UTC m=+0.571487087,LastTimestamp:2025-09-13 00:07:29.323356179 +0000 UTC m=+0.571487087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-8d584fda4c,}"
Sep 13 00:07:29.344578 kubelet[2203]: I0913 00:07:29.344541 2203 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:07:29.346725 kubelet[2203]: E0913 00:07:29.346368 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:29.346725 kubelet[2203]: I0913 00:07:29.346403 2203 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:07:29.347869 kubelet[2203]: I0913 00:07:29.347845 2203 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:07:29.347948 kubelet[2203]: I0913 00:07:29.347921 2203 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:07:29.348924 kubelet[2203]: I0913 00:07:29.348621 2203 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:07:29.348924 kubelet[2203]: I0913 00:07:29.348701 2203 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:07:29.351511 kubelet[2203]: E0913 00:07:29.351481 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.146.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-8d584fda4c?timeout=10s\": dial tcp 65.108.146.26:6443: connect: connection refused" interval="200ms"
Sep 13 00:07:29.352053 kubelet[2203]: E0913 00:07:29.352034 2203 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:07:29.353600 kubelet[2203]: I0913 00:07:29.353585 2203 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:07:29.358777 kubelet[2203]: I0913 00:07:29.358734 2203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:07:29.361157 kubelet[2203]: I0913 00:07:29.359644 2203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:07:29.361157 kubelet[2203]: I0913 00:07:29.359670 2203 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:07:29.361157 kubelet[2203]: I0913 00:07:29.359688 2203 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:07:29.361157 kubelet[2203]: E0913 00:07:29.359723 2203 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:07:29.364683 kubelet[2203]: W0913 00:07:29.364641 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.108.146.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.108.146.26:6443: connect: connection refused
Sep 13 00:07:29.364779 kubelet[2203]: E0913 00:07:29.364760 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.108.146.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.108.146.26:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:29.367537 kubelet[2203]: W0913 00:07:29.367490 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.108.146.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.108.146.26:6443: connect: connection refused
Sep 13 00:07:29.367603 kubelet[2203]: E0913 00:07:29.367548 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.108.146.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.108.146.26:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:29.388593 kubelet[2203]: I0913 00:07:29.388567 2203 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:07:29.388593 kubelet[2203]: I0913 00:07:29.388585 2203 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:07:29.388692 kubelet[2203]: I0913 00:07:29.388600 2203 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:07:29.390683 kubelet[2203]: I0913 00:07:29.390652 2203 policy_none.go:49] "None policy: Start"
Sep 13 00:07:29.391369 kubelet[2203]: I0913 00:07:29.391316 2203 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:07:29.391369 kubelet[2203]: I0913 00:07:29.391370 2203 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:07:29.398962 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 00:07:29.406552 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 00:07:29.409922 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 13 00:07:29.416861 kubelet[2203]: I0913 00:07:29.416841 2203 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:07:29.417613 kubelet[2203]: I0913 00:07:29.417296 2203 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:07:29.417613 kubelet[2203]: I0913 00:07:29.417308 2203 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:07:29.417613 kubelet[2203]: I0913 00:07:29.417522 2203 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:07:29.419968 kubelet[2203]: E0913 00:07:29.419934 2203 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:29.470394 systemd[1]: Created slice kubepods-burstable-pod9c51f6e080fb54331b24ebb35d9986ba.slice - libcontainer container kubepods-burstable-pod9c51f6e080fb54331b24ebb35d9986ba.slice.
Sep 13 00:07:29.482035 systemd[1]: Created slice kubepods-burstable-pod6e735031927ef04ee1bba99632badb72.slice - libcontainer container kubepods-burstable-pod6e735031927ef04ee1bba99632badb72.slice.
Sep 13 00:07:29.485543 systemd[1]: Created slice kubepods-burstable-pode99f05ca64c5218a6a9655058be0c94b.slice - libcontainer container kubepods-burstable-pode99f05ca64c5218a6a9655058be0c94b.slice.
Sep 13 00:07:29.521880 kubelet[2203]: I0913 00:07:29.521740 2203 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.523394 kubelet[2203]: E0913 00:07:29.523189 2203 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://65.108.146.26:6443/api/v1/nodes\": dial tcp 65.108.146.26:6443: connect: connection refused" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.552177 kubelet[2203]: E0913 00:07:29.552097 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.146.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-8d584fda4c?timeout=10s\": dial tcp 65.108.146.26:6443: connect: connection refused" interval="400ms"
Sep 13 00:07:29.649810 kubelet[2203]: I0913 00:07:29.649752 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c51f6e080fb54331b24ebb35d9986ba-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-8d584fda4c\" (UID: \"9c51f6e080fb54331b24ebb35d9986ba\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.649810 kubelet[2203]: I0913 00:07:29.649804 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c51f6e080fb54331b24ebb35d9986ba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-8d584fda4c\" (UID: \"9c51f6e080fb54331b24ebb35d9986ba\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.649810 kubelet[2203]: I0913 00:07:29.649828 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e735031927ef04ee1bba99632badb72-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-8d584fda4c\" (UID: \"6e735031927ef04ee1bba99632badb72\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.650030 kubelet[2203]: I0913 00:07:29.649847 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e735031927ef04ee1bba99632badb72-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-8d584fda4c\" (UID: \"6e735031927ef04ee1bba99632badb72\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.650030 kubelet[2203]: I0913 00:07:29.649862 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c51f6e080fb54331b24ebb35d9986ba-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-8d584fda4c\" (UID: \"9c51f6e080fb54331b24ebb35d9986ba\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.650030 kubelet[2203]: I0913 00:07:29.649877 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e735031927ef04ee1bba99632badb72-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-8d584fda4c\" (UID: \"6e735031927ef04ee1bba99632badb72\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.650030 kubelet[2203]: I0913 00:07:29.649891 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e735031927ef04ee1bba99632badb72-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-8d584fda4c\" (UID: \"6e735031927ef04ee1bba99632badb72\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.650030 kubelet[2203]: I0913 00:07:29.649904 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e735031927ef04ee1bba99632badb72-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-8d584fda4c\" (UID: \"6e735031927ef04ee1bba99632badb72\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.650210 kubelet[2203]: I0913 00:07:29.649918 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e99f05ca64c5218a6a9655058be0c94b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-8d584fda4c\" (UID: \"e99f05ca64c5218a6a9655058be0c94b\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.726026 kubelet[2203]: I0913 00:07:29.725987 2203 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.726480 kubelet[2203]: E0913 00:07:29.726420 2203 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://65.108.146.26:6443/api/v1/nodes\": dial tcp 65.108.146.26:6443: connect: connection refused" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:29.780096 containerd[1508]: time="2025-09-13T00:07:29.779968965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-8d584fda4c,Uid:9c51f6e080fb54331b24ebb35d9986ba,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:29.789387 containerd[1508]: time="2025-09-13T00:07:29.789348532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-8d584fda4c,Uid:6e735031927ef04ee1bba99632badb72,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:29.789670 containerd[1508]: time="2025-09-13T00:07:29.789630581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-8d584fda4c,Uid:e99f05ca64c5218a6a9655058be0c94b,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:29.952941 kubelet[2203]: E0913 00:07:29.952896 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.146.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-8d584fda4c?timeout=10s\": dial tcp 65.108.146.26:6443: connect: connection refused" interval="800ms"
Sep 13 00:07:30.128279 kubelet[2203]: I0913 00:07:30.128179 2203 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:30.128652 kubelet[2203]: E0913 00:07:30.128613 2203 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://65.108.146.26:6443/api/v1/nodes\": dial tcp 65.108.146.26:6443: connect: connection refused" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:30.195103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228976530.mount: Deactivated successfully.
Sep 13 00:07:30.202106 containerd[1508]: time="2025-09-13T00:07:30.202008121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:07:30.203218 containerd[1508]: time="2025-09-13T00:07:30.203164920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
Sep 13 00:07:30.204173 containerd[1508]: time="2025-09-13T00:07:30.204099192Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:07:30.205613 containerd[1508]: time="2025-09-13T00:07:30.205560482Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:07:30.206423 containerd[1508]: time="2025-09-13T00:07:30.206290161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:07:30.206604 containerd[1508]: time="2025-09-13T00:07:30.206577479Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:07:30.207374 containerd[1508]: time="2025-09-13T00:07:30.207330892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:07:30.209920 containerd[1508]: time="2025-09-13T00:07:30.209816974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:07:30.212246 containerd[1508]: time="2025-09-13T00:07:30.212003233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.586213ms"
Sep 13 00:07:30.214839 containerd[1508]: time="2025-09-13T00:07:30.214036556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 424.342435ms"
Sep 13 00:07:30.215290 containerd[1508]: time="2025-09-13T00:07:30.215237117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 435.17589ms"
Sep 13 00:07:30.348893 containerd[1508]: time="2025-09-13T00:07:30.348719450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:30.351756 containerd[1508]: time="2025-09-13T00:07:30.348766278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:30.351756 containerd[1508]: time="2025-09-13T00:07:30.350505639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:30.351756 containerd[1508]: time="2025-09-13T00:07:30.350578065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:30.352845 containerd[1508]: time="2025-09-13T00:07:30.352778030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:30.352845 containerd[1508]: time="2025-09-13T00:07:30.352839014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:30.353124 containerd[1508]: time="2025-09-13T00:07:30.352861296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:30.353213 containerd[1508]: time="2025-09-13T00:07:30.352949291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:30.354511 containerd[1508]: time="2025-09-13T00:07:30.354414038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:30.356252 containerd[1508]: time="2025-09-13T00:07:30.354584407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:30.356252 containerd[1508]: time="2025-09-13T00:07:30.354621928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:30.356252 containerd[1508]: time="2025-09-13T00:07:30.354887355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:30.387301 systemd[1]: Started cri-containerd-81b7c4bb42fcd2590d09003d5031743a91aa05f2f0811e2137ba6e907d0108ab.scope - libcontainer container 81b7c4bb42fcd2590d09003d5031743a91aa05f2f0811e2137ba6e907d0108ab.
Sep 13 00:07:30.393227 systemd[1]: Started cri-containerd-2b7ac79d1941e8a3ca52b11864085733f33ec90b76336308826634e71aa72ff1.scope - libcontainer container 2b7ac79d1941e8a3ca52b11864085733f33ec90b76336308826634e71aa72ff1.
Sep 13 00:07:30.395267 systemd[1]: Started cri-containerd-ab166f593a282d1404bd852bf3f837ec2e09749346398f785aa4ca06791b7582.scope - libcontainer container ab166f593a282d1404bd852bf3f837ec2e09749346398f785aa4ca06791b7582.
Sep 13 00:07:30.449742 containerd[1508]: time="2025-09-13T00:07:30.449566418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-8d584fda4c,Uid:e99f05ca64c5218a6a9655058be0c94b,Namespace:kube-system,Attempt:0,} returns sandbox id \"81b7c4bb42fcd2590d09003d5031743a91aa05f2f0811e2137ba6e907d0108ab\""
Sep 13 00:07:30.451971 containerd[1508]: time="2025-09-13T00:07:30.451840572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-8d584fda4c,Uid:6e735031927ef04ee1bba99632badb72,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b7ac79d1941e8a3ca52b11864085733f33ec90b76336308826634e71aa72ff1\""
Sep 13 00:07:30.460976 containerd[1508]: time="2025-09-13T00:07:30.459960678Z" level=info msg="CreateContainer within sandbox \"81b7c4bb42fcd2590d09003d5031743a91aa05f2f0811e2137ba6e907d0108ab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:07:30.461814 containerd[1508]: time="2025-09-13T00:07:30.461787123Z" level=info msg="CreateContainer within sandbox \"2b7ac79d1941e8a3ca52b11864085733f33ec90b76336308826634e71aa72ff1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:07:30.470006 containerd[1508]: time="2025-09-13T00:07:30.469972612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-8d584fda4c,Uid:9c51f6e080fb54331b24ebb35d9986ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab166f593a282d1404bd852bf3f837ec2e09749346398f785aa4ca06791b7582\""
Sep 13 00:07:30.472715 containerd[1508]: time="2025-09-13T00:07:30.472693914Z" level=info msg="CreateContainer within sandbox \"ab166f593a282d1404bd852bf3f837ec2e09749346398f785aa4ca06791b7582\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:07:30.482561 containerd[1508]: time="2025-09-13T00:07:30.482512706Z" level=info msg="CreateContainer within sandbox \"81b7c4bb42fcd2590d09003d5031743a91aa05f2f0811e2137ba6e907d0108ab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ca58cf61a5064465361664f8ae8e45d7f586b787937ff52182d4814ee0365fed\""
Sep 13 00:07:30.483362 containerd[1508]: time="2025-09-13T00:07:30.483323096Z" level=info msg="StartContainer for \"ca58cf61a5064465361664f8ae8e45d7f586b787937ff52182d4814ee0365fed\""
Sep 13 00:07:30.489472 containerd[1508]: time="2025-09-13T00:07:30.489342673Z" level=info msg="CreateContainer within sandbox \"2b7ac79d1941e8a3ca52b11864085733f33ec90b76336308826634e71aa72ff1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002\""
Sep 13 00:07:30.489819 containerd[1508]: time="2025-09-13T00:07:30.489776236Z" level=info msg="StartContainer for \"d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002\""
Sep 13 00:07:30.492938 containerd[1508]: time="2025-09-13T00:07:30.492816757Z" level=info msg="CreateContainer within sandbox \"ab166f593a282d1404bd852bf3f837ec2e09749346398f785aa4ca06791b7582\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a4322f70920f7850a9204c9b7cebae8a90351df8f98cdb8626db2025e9d1441\""
Sep 13 00:07:30.494368 containerd[1508]: time="2025-09-13T00:07:30.494335975Z" level=info msg="StartContainer for \"6a4322f70920f7850a9204c9b7cebae8a90351df8f98cdb8626db2025e9d1441\""
Sep 13 00:07:30.516185 systemd[1]: Started cri-containerd-ca58cf61a5064465361664f8ae8e45d7f586b787937ff52182d4814ee0365fed.scope - libcontainer container ca58cf61a5064465361664f8ae8e45d7f586b787937ff52182d4814ee0365fed.
Sep 13 00:07:30.526039 systemd[1]: Started cri-containerd-d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002.scope - libcontainer container d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002.
Sep 13 00:07:30.536380 systemd[1]: Started cri-containerd-6a4322f70920f7850a9204c9b7cebae8a90351df8f98cdb8626db2025e9d1441.scope - libcontainer container 6a4322f70920f7850a9204c9b7cebae8a90351df8f98cdb8626db2025e9d1441.
Sep 13 00:07:30.545119 kubelet[2203]: W0913 00:07:30.544907 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.108.146.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.108.146.26:6443: connect: connection refused
Sep 13 00:07:30.545119 kubelet[2203]: E0913 00:07:30.545078 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.108.146.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.108.146.26:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:30.583746 containerd[1508]: time="2025-09-13T00:07:30.583529793Z" level=info msg="StartContainer for \"ca58cf61a5064465361664f8ae8e45d7f586b787937ff52182d4814ee0365fed\" returns successfully"
Sep 13 00:07:30.596000 containerd[1508]: time="2025-09-13T00:07:30.593883057Z" level=info msg="StartContainer for \"6a4322f70920f7850a9204c9b7cebae8a90351df8f98cdb8626db2025e9d1441\" returns successfully"
Sep 13 00:07:30.598005 containerd[1508]: time="2025-09-13T00:07:30.597980540Z" level=info msg="StartContainer for \"d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002\" returns successfully"
Sep 13 00:07:30.603703 kubelet[2203]: W0913 00:07:30.603644 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.108.146.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-8d584fda4c&limit=500&resourceVersion=0": dial tcp 65.108.146.26:6443: connect: connection refused
Sep 13 00:07:30.603813 kubelet[2203]: E0913 00:07:30.603710 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.108.146.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-8d584fda4c&limit=500&resourceVersion=0\": dial tcp 65.108.146.26:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:30.612352 kubelet[2203]: W0913 00:07:30.612294 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.108.146.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.108.146.26:6443: connect: connection refused
Sep 13 00:07:30.612478 kubelet[2203]: E0913 00:07:30.612360 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.108.146.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.108.146.26:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:30.733496 kubelet[2203]: W0913 00:07:30.733436 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.108.146.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.108.146.26:6443: connect: connection refused
Sep 13 00:07:30.733496 kubelet[2203]: E0913 00:07:30.733504 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.108.146.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.108.146.26:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:30.753326 kubelet[2203]: E0913 00:07:30.753283 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.146.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-8d584fda4c?timeout=10s\": dial tcp 65.108.146.26:6443: connect: connection refused" interval="1.6s"
Sep 13 00:07:30.931411 kubelet[2203]: I0913 00:07:30.931309 2203 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:32.150784 kubelet[2203]: I0913 00:07:32.149947 2203 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:32.150784 kubelet[2203]: E0913 00:07:32.149978 2203 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-8d584fda4c\": node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:32.166476 kubelet[2203]: E0913 00:07:32.166435 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:32.267408 kubelet[2203]: E0913 00:07:32.267333 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:32.367908 kubelet[2203]: E0913 00:07:32.367843 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:32.468844 kubelet[2203]: E0913 00:07:32.468773 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:32.569464 kubelet[2203]: E0913 00:07:32.569390 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:32.670279 kubelet[2203]: E0913 00:07:32.670210 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:32.771069 kubelet[2203]: E0913 00:07:32.770912 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:32.872167 kubelet[2203]: E0913 00:07:32.872063 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:32.972936 kubelet[2203]: E0913 00:07:32.972883 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:33.073753 kubelet[2203]: E0913 00:07:33.073501 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:33.174523 kubelet[2203]: E0913 00:07:33.174439 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-8d584fda4c\" not found"
Sep 13 00:07:33.326905 kubelet[2203]: I0913 00:07:33.326411 2203 apiserver.go:52] "Watching apiserver"
Sep 13 00:07:33.348552 kubelet[2203]: I0913 00:07:33.348501 2203 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:07:34.202345 systemd[1]: Reloading requested from client PID 2473 ('systemctl') (unit session-7.scope)...
Sep 13 00:07:34.202363 systemd[1]: Reloading...
Sep 13 00:07:34.303163 zram_generator::config[2522]: No configuration found.
Sep 13 00:07:34.378225 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:07:34.466281 systemd[1]: Reloading finished in 263 ms.
Sep 13 00:07:34.510528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:07:34.531739 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:07:34.531985 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:34.538450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:07:34.660889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:34.671509 (kubelet)[2564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:07:34.742162 kubelet[2564]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:07:34.742162 kubelet[2564]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:07:34.742162 kubelet[2564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:07:34.742162 kubelet[2564]: I0913 00:07:34.740601 2564 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:07:34.747325 kubelet[2564]: I0913 00:07:34.747299 2564 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:07:34.747325 kubelet[2564]: I0913 00:07:34.747317 2564 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:07:34.747538 kubelet[2564]: I0913 00:07:34.747512 2564 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:07:34.754692 kubelet[2564]: I0913 00:07:34.754172 2564 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
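Note how the lease controller's retry interval in the entries above doubles from 200ms to 400ms, 800ms, and finally 1.6s while the API server stays unreachable. A minimal Go sketch of that capped exponential backoff pattern (the cap and attempt count below are assumptions for illustration, not the kubelet's actual parameters):

package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond // starting interval seen in the log
	maxInterval := 7 * time.Second     // assumed cap for this sketch

	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d failed, will retry in %v\n", attempt, interval)
		time.Sleep(interval)
		// Double the interval after each failure, up to the cap.
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}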
Sep 13 00:07:34.760200 kubelet[2564]: I0913 00:07:34.760167 2564 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:07:34.764834 kubelet[2564]: E0913 00:07:34.764747 2564 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:07:34.764944 kubelet[2564]: I0913 00:07:34.764932 2564 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:07:34.767741 kubelet[2564]: I0913 00:07:34.767719 2564 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:07:34.767817 kubelet[2564]: I0913 00:07:34.767804 2564 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:07:34.767917 kubelet[2564]: I0913 00:07:34.767878 2564 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:07:34.768079 kubelet[2564]: I0913 00:07:34.767912 2564 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-8d584fda4c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:07:34.768203 kubelet[2564]: I0913 00:07:34.768083 2564 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:07:34.768203 kubelet[2564]: I0913 00:07:34.768093 2564 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:07:34.768203 kubelet[2564]: I0913 00:07:34.768146 2564 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:07:34.768271 kubelet[2564]: I0913 00:07:34.768221 2564 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:07:34.768271 kubelet[2564]: I0913 00:07:34.768231 2564 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:07:34.768271 kubelet[2564]: I0913 00:07:34.768250 2564 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:07:34.768271 kubelet[2564]: I0913 00:07:34.768258 2564 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:07:34.773147 kubelet[2564]: I0913 00:07:34.771526 2564 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:07:34.774095 kubelet[2564]: I0913 00:07:34.774073 2564 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:07:34.783635 kubelet[2564]: I0913 00:07:34.783469 2564 server.go:1274] "Started kubelet"
Sep 13 00:07:34.783690 kubelet[2564]: I0913 00:07:34.783639 2564 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:07:34.784841 kubelet[2564]: I0913 00:07:34.784818 2564 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:07:34.785824 kubelet[2564]: I0913 00:07:34.785805 2564 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:07:34.786710 kubelet[2564]: I0913 00:07:34.786696 2564 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:07:34.789913 kubelet[2564]: I0913 00:07:34.789873 2564 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:07:34.792243 kubelet[2564]: I0913 00:07:34.792076 2564 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:07:34.794390 kubelet[2564]: E0913 00:07:34.794375 2564 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:07:34.795983 kubelet[2564]: I0913 00:07:34.795651 2564 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:07:34.797208 kubelet[2564]: I0913 00:07:34.797122 2564 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:07:34.797350 kubelet[2564]: I0913 00:07:34.797341 2564 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:07:34.798169 kubelet[2564]: I0913 00:07:34.798159 2564 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:07:34.798400 kubelet[2564]: I0913 00:07:34.798332 2564 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:07:34.801467 kubelet[2564]: I0913 00:07:34.800844 2564 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:07:34.805477 kubelet[2564]: I0913 00:07:34.805448 2564 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:07:34.806164 kubelet[2564]: I0913 00:07:34.806149 2564 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:07:34.806164 kubelet[2564]: I0913 00:07:34.806165 2564 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:07:34.806234 kubelet[2564]: I0913 00:07:34.806182 2564 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:07:34.806234 kubelet[2564]: E0913 00:07:34.806211 2564 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:07:34.849175 kubelet[2564]: I0913 00:07:34.849063 2564 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:07:34.849175 kubelet[2564]: I0913 00:07:34.849077 2564 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:07:34.849175 kubelet[2564]: I0913 00:07:34.849091 2564 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:07:34.849685 kubelet[2564]: I0913 00:07:34.849449 2564 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:07:34.849685 kubelet[2564]: I0913 00:07:34.849462 2564 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:07:34.849685 kubelet[2564]: I0913 00:07:34.849478 2564 policy_none.go:49] "None policy: Start"
Sep 13 00:07:34.850758 kubelet[2564]: I0913 00:07:34.850204 2564 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:07:34.850758 kubelet[2564]: I0913 00:07:34.850220 2564 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:07:34.850758 kubelet[2564]: I0913 00:07:34.850358 2564 state_mem.go:75] "Updated machine memory state"
Sep 13 00:07:34.859953 kubelet[2564]: I0913 00:07:34.859939 2564 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:07:34.861261 kubelet[2564]: I0913 00:07:34.860281 2564 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:07:34.861261 kubelet[2564]: I0913 00:07:34.861192 2564 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:07:34.861594 kubelet[2564]: I0913 00:07:34.861529 2564 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:07:34.916694 kubelet[2564]: E0913 00:07:34.916465 2564 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-5-n-8d584fda4c\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.973058 kubelet[2564]: I0913 00:07:34.973019 2564 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.984779 kubelet[2564]: I0913 00:07:34.984525 2564 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.984779 kubelet[2564]: I0913 00:07:34.984631 2564 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.999342 kubelet[2564]: I0913 00:07:34.998994 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e735031927ef04ee1bba99632badb72-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-8d584fda4c\" (UID: \"6e735031927ef04ee1bba99632badb72\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.999342 kubelet[2564]: I0913 00:07:34.999031 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e99f05ca64c5218a6a9655058be0c94b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-8d584fda4c\" (UID: \"e99f05ca64c5218a6a9655058be0c94b\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.999342 kubelet[2564]: I0913 00:07:34.999053 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c51f6e080fb54331b24ebb35d9986ba-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-8d584fda4c\" (UID: \"9c51f6e080fb54331b24ebb35d9986ba\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.999342 kubelet[2564]: I0913 00:07:34.999074 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c51f6e080fb54331b24ebb35d9986ba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-8d584fda4c\" (UID: \"9c51f6e080fb54331b24ebb35d9986ba\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.999342 kubelet[2564]: I0913 00:07:34.999092 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e735031927ef04ee1bba99632badb72-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-8d584fda4c\" (UID: \"6e735031927ef04ee1bba99632badb72\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.999580 kubelet[2564]: I0913 00:07:34.999121 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e735031927ef04ee1bba99632badb72-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-8d584fda4c\" (UID: \"6e735031927ef04ee1bba99632badb72\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.999580 kubelet[2564]: I0913 00:07:34.999185 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c51f6e080fb54331b24ebb35d9986ba-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-8d584fda4c\" (UID: \"9c51f6e080fb54331b24ebb35d9986ba\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.999580 kubelet[2564]: I0913 00:07:34.999236 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e735031927ef04ee1bba99632badb72-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-8d584fda4c\" (UID: \"6e735031927ef04ee1bba99632badb72\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:34.999580 kubelet[2564]: I0913 00:07:34.999266 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e735031927ef04ee1bba99632badb72-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-8d584fda4c\" (UID: \"6e735031927ef04ee1bba99632badb72\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c"
Sep 13 00:07:35.770932 kubelet[2564]: I0913 00:07:35.769443 2564 apiserver.go:52] "Watching apiserver"
Sep 13 00:07:35.798030 kubelet[2564]: I0913 00:07:35.797964 2564 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:07:35.848148 kubelet[2564]: E0913 00:07:35.846677 2564
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-5-n-8d584fda4c\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-8d584fda4c" Sep 13 00:07:35.877916 kubelet[2564]: I0913 00:07:35.877859 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-8d584fda4c" podStartSLOduration=2.877840274 podStartE2EDuration="2.877840274s" podCreationTimestamp="2025-09-13 00:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:35.868617872 +0000 UTC m=+1.173595893" watchObservedRunningTime="2025-09-13 00:07:35.877840274 +0000 UTC m=+1.182818296" Sep 13 00:07:35.885854 kubelet[2564]: I0913 00:07:35.885804 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-8d584fda4c" podStartSLOduration=1.885775263 podStartE2EDuration="1.885775263s" podCreationTimestamp="2025-09-13 00:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:35.87847819 +0000 UTC m=+1.183456213" watchObservedRunningTime="2025-09-13 00:07:35.885775263 +0000 UTC m=+1.190753285" Sep 13 00:07:35.896614 kubelet[2564]: I0913 00:07:35.896543 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-8d584fda4c" podStartSLOduration=1.896528577 podStartE2EDuration="1.896528577s" podCreationTimestamp="2025-09-13 00:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:35.886381801 +0000 UTC m=+1.191359823" watchObservedRunningTime="2025-09-13 00:07:35.896528577 +0000 UTC m=+1.201506599" Sep 13 00:07:38.222421 update_engine[1479]: I20250913 00:07:38.222332 1479 update_attempter.cc:509] Updating boot flags... Sep 13 00:07:38.268244 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2620) Sep 13 00:07:38.336406 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2619) Sep 13 00:07:38.367166 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2619) Sep 13 00:07:39.440565 kubelet[2564]: I0913 00:07:39.440525 2564 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:07:39.440922 containerd[1508]: time="2025-09-13T00:07:39.440829662Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:07:39.441249 kubelet[2564]: I0913 00:07:39.440983 2564 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:07:40.112097 systemd[1]: Created slice kubepods-besteffort-podb997aa82_bbe5_4918_a03e_f7ace791bd5c.slice - libcontainer container kubepods-besteffort-podb997aa82_bbe5_4918_a03e_f7ace791bd5c.slice. 
Sep 13 00:07:40.132117 kubelet[2564]: I0913 00:07:40.132071 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmh8h\" (UniqueName: \"kubernetes.io/projected/b997aa82-bbe5-4918-a03e-f7ace791bd5c-kube-api-access-hmh8h\") pod \"kube-proxy-fc2wl\" (UID: \"b997aa82-bbe5-4918-a03e-f7ace791bd5c\") " pod="kube-system/kube-proxy-fc2wl"
Sep 13 00:07:40.132263 kubelet[2564]: I0913 00:07:40.132152 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b997aa82-bbe5-4918-a03e-f7ace791bd5c-kube-proxy\") pod \"kube-proxy-fc2wl\" (UID: \"b997aa82-bbe5-4918-a03e-f7ace791bd5c\") " pod="kube-system/kube-proxy-fc2wl"
Sep 13 00:07:40.132263 kubelet[2564]: I0913 00:07:40.132181 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b997aa82-bbe5-4918-a03e-f7ace791bd5c-xtables-lock\") pod \"kube-proxy-fc2wl\" (UID: \"b997aa82-bbe5-4918-a03e-f7ace791bd5c\") " pod="kube-system/kube-proxy-fc2wl"
Sep 13 00:07:40.132263 kubelet[2564]: I0913 00:07:40.132205 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b997aa82-bbe5-4918-a03e-f7ace791bd5c-lib-modules\") pod \"kube-proxy-fc2wl\" (UID: \"b997aa82-bbe5-4918-a03e-f7ace791bd5c\") " pod="kube-system/kube-proxy-fc2wl"
Sep 13 00:07:40.423225 containerd[1508]: time="2025-09-13T00:07:40.423085792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fc2wl,Uid:b997aa82-bbe5-4918-a03e-f7ace791bd5c,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:40.452706 containerd[1508]: time="2025-09-13T00:07:40.452477495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:40.452706 containerd[1508]: time="2025-09-13T00:07:40.452526867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:40.452706 containerd[1508]: time="2025-09-13T00:07:40.452538510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:40.452706 containerd[1508]: time="2025-09-13T00:07:40.452610625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:40.470578 systemd[1]: run-containerd-runc-k8s.io-9c60af4180c21cdc0a44a11efc37cc6225ecb21ca5b487a25b966b9ab276c449-runc.nhVVLz.mount: Deactivated successfully.
Sep 13 00:07:40.479782 systemd[1]: Started cri-containerd-9c60af4180c21cdc0a44a11efc37cc6225ecb21ca5b487a25b966b9ab276c449.scope - libcontainer container 9c60af4180c21cdc0a44a11efc37cc6225ecb21ca5b487a25b966b9ab276c449.
Sep 13 00:07:40.500038 containerd[1508]: time="2025-09-13T00:07:40.499999784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fc2wl,Uid:b997aa82-bbe5-4918-a03e-f7ace791bd5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c60af4180c21cdc0a44a11efc37cc6225ecb21ca5b487a25b966b9ab276c449\""
Sep 13 00:07:40.502482 containerd[1508]: time="2025-09-13T00:07:40.502378434Z" level=info msg="CreateContainer within sandbox \"9c60af4180c21cdc0a44a11efc37cc6225ecb21ca5b487a25b966b9ab276c449\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:07:40.518351 containerd[1508]: time="2025-09-13T00:07:40.518252910Z" level=info msg="CreateContainer within sandbox \"9c60af4180c21cdc0a44a11efc37cc6225ecb21ca5b487a25b966b9ab276c449\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"847e39c0409a5e8acbef1235694a1e4056f13a44d7b7c9aa3991b2c2f53a1f23\""
Sep 13 00:07:40.519240 containerd[1508]: time="2025-09-13T00:07:40.519000132Z" level=info msg="StartContainer for \"847e39c0409a5e8acbef1235694a1e4056f13a44d7b7c9aa3991b2c2f53a1f23\""
Sep 13 00:07:40.550272 systemd[1]: Started cri-containerd-847e39c0409a5e8acbef1235694a1e4056f13a44d7b7c9aa3991b2c2f53a1f23.scope - libcontainer container 847e39c0409a5e8acbef1235694a1e4056f13a44d7b7c9aa3991b2c2f53a1f23.
Sep 13 00:07:40.586653 containerd[1508]: time="2025-09-13T00:07:40.586614385Z" level=info msg="StartContainer for \"847e39c0409a5e8acbef1235694a1e4056f13a44d7b7c9aa3991b2c2f53a1f23\" returns successfully"
Sep 13 00:07:40.600890 systemd[1]: Created slice kubepods-besteffort-pod83a21a4f_0580_454f_8542_b6fd48cf90a6.slice - libcontainer container kubepods-besteffort-pod83a21a4f_0580_454f_8542_b6fd48cf90a6.slice.
Sep 13 00:07:40.636470 kubelet[2564]: I0913 00:07:40.636428 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83a21a4f-0580-454f-8542-b6fd48cf90a6-var-lib-calico\") pod \"tigera-operator-58fc44c59b-z6nnw\" (UID: \"83a21a4f-0580-454f-8542-b6fd48cf90a6\") " pod="tigera-operator/tigera-operator-58fc44c59b-z6nnw"
Sep 13 00:07:40.636470 kubelet[2564]: I0913 00:07:40.636477 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkb5c\" (UniqueName: \"kubernetes.io/projected/83a21a4f-0580-454f-8542-b6fd48cf90a6-kube-api-access-lkb5c\") pod \"tigera-operator-58fc44c59b-z6nnw\" (UID: \"83a21a4f-0580-454f-8542-b6fd48cf90a6\") " pod="tigera-operator/tigera-operator-58fc44c59b-z6nnw"
Sep 13 00:07:40.905190 containerd[1508]: time="2025-09-13T00:07:40.905125727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-z6nnw,Uid:83a21a4f-0580-454f-8542-b6fd48cf90a6,Namespace:tigera-operator,Attempt:0,}"
Sep 13 00:07:40.928894 containerd[1508]: time="2025-09-13T00:07:40.927739900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:40.928894 containerd[1508]: time="2025-09-13T00:07:40.927786397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:40.928894 containerd[1508]: time="2025-09-13T00:07:40.927797749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:40.928894 containerd[1508]: time="2025-09-13T00:07:40.927878260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:40.944279 systemd[1]: Started cri-containerd-cfe8d6f37bae79c6c68d9b0144f641f9d9440edd50b8669fc42af7fd7e2f6f75.scope - libcontainer container cfe8d6f37bae79c6c68d9b0144f641f9d9440edd50b8669fc42af7fd7e2f6f75.
Sep 13 00:07:40.987742 containerd[1508]: time="2025-09-13T00:07:40.987684543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-z6nnw,Uid:83a21a4f-0580-454f-8542-b6fd48cf90a6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cfe8d6f37bae79c6c68d9b0144f641f9d9440edd50b8669fc42af7fd7e2f6f75\""
Sep 13 00:07:40.989401 containerd[1508]: time="2025-09-13T00:07:40.989371326Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 00:07:42.963863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070050839.mount: Deactivated successfully.
Sep 13 00:07:43.041696 kubelet[2564]: I0913 00:07:43.041626 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fc2wl" podStartSLOduration=3.041605795 podStartE2EDuration="3.041605795s" podCreationTimestamp="2025-09-13 00:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:40.861346015 +0000 UTC m=+6.166324047" watchObservedRunningTime="2025-09-13 00:07:43.041605795 +0000 UTC m=+8.346583846"
Sep 13 00:07:43.356586 containerd[1508]: time="2025-09-13T00:07:43.356493281Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:43.357982 containerd[1508]: time="2025-09-13T00:07:43.357792386Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 00:07:43.357982 containerd[1508]: time="2025-09-13T00:07:43.357945583Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:43.359756 containerd[1508]: time="2025-09-13T00:07:43.359723708Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:43.360471 containerd[1508]: time="2025-09-13T00:07:43.360239815Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.3708417s"
Sep 13 00:07:43.360471 containerd[1508]: time="2025-09-13T00:07:43.360267106Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 00:07:43.361640 containerd[1508]: time="2025-09-13T00:07:43.361558058Z" level=info msg="CreateContainer within sandbox \"cfe8d6f37bae79c6c68d9b0144f641f9d9440edd50b8669fc42af7fd7e2f6f75\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:07:43.377482 containerd[1508]: time="2025-09-13T00:07:43.377448643Z" level=info msg="CreateContainer within sandbox \"cfe8d6f37bae79c6c68d9b0144f641f9d9440edd50b8669fc42af7fd7e2f6f75\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e\""
Sep 13 00:07:43.380310 containerd[1508]: time="2025-09-13T00:07:43.378179213Z" level=info msg="StartContainer for \"2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e\""
Sep 13 00:07:43.405264 systemd[1]: Started cri-containerd-2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e.scope - libcontainer container 2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e.
Sep 13 00:07:43.423189 containerd[1508]: time="2025-09-13T00:07:43.423086619Z" level=info msg="StartContainer for \"2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e\" returns successfully"
Sep 13 00:07:43.865666 kubelet[2564]: I0913 00:07:43.865433 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-z6nnw" podStartSLOduration=1.493571322 podStartE2EDuration="3.865415872s" podCreationTimestamp="2025-09-13 00:07:40 +0000 UTC" firstStartedPulling="2025-09-13 00:07:40.988915591 +0000 UTC m=+6.293893613" lastFinishedPulling="2025-09-13 00:07:43.360760141 +0000 UTC m=+8.665738163" observedRunningTime="2025-09-13 00:07:43.865208984 +0000 UTC m=+9.170187016" watchObservedRunningTime="2025-09-13 00:07:43.865415872 +0000 UTC m=+9.170393904"
Sep 13 00:07:48.974044 sudo[1719]: pam_unix(sudo:session): session closed for user root
Sep 13 00:07:49.131757 sshd[1701]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:49.134765 systemd[1]: sshd@6-65.108.146.26:22-147.75.109.163:45582.service: Deactivated successfully.
Sep 13 00:07:49.137379 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:07:49.137691 systemd[1]: session-7.scope: Consumed 3.816s CPU time, 142.5M memory peak, 0B memory swap peak.
Sep 13 00:07:49.140899 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:07:49.142768 systemd-logind[1477]: Removed session 7.
Sep 13 00:07:51.758914 systemd[1]: Created slice kubepods-besteffort-pod756bd750_e59f_42b4_9067_7dd298be8b40.slice - libcontainer container kubepods-besteffort-pod756bd750_e59f_42b4_9067_7dd298be8b40.slice.
Sep 13 00:07:51.810118 kubelet[2564]: I0913 00:07:51.810053 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtt5x\" (UniqueName: \"kubernetes.io/projected/756bd750-e59f-42b4-9067-7dd298be8b40-kube-api-access-mtt5x\") pod \"calico-typha-8454ffb874-t7mf5\" (UID: \"756bd750-e59f-42b4-9067-7dd298be8b40\") " pod="calico-system/calico-typha-8454ffb874-t7mf5"
Sep 13 00:07:51.810625 kubelet[2564]: I0913 00:07:51.810150 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/756bd750-e59f-42b4-9067-7dd298be8b40-tigera-ca-bundle\") pod \"calico-typha-8454ffb874-t7mf5\" (UID: \"756bd750-e59f-42b4-9067-7dd298be8b40\") " pod="calico-system/calico-typha-8454ffb874-t7mf5"
Sep 13 00:07:51.810625 kubelet[2564]: I0913 00:07:51.810188 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/756bd750-e59f-42b4-9067-7dd298be8b40-typha-certs\") pod \"calico-typha-8454ffb874-t7mf5\" (UID: \"756bd750-e59f-42b4-9067-7dd298be8b40\") " pod="calico-system/calico-typha-8454ffb874-t7mf5"
Sep 13 00:07:52.070890 containerd[1508]: time="2025-09-13T00:07:52.070764721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8454ffb874-t7mf5,Uid:756bd750-e59f-42b4-9067-7dd298be8b40,Namespace:calico-system,Attempt:0,}"
Sep 13 00:07:52.082220 systemd[1]: Created slice kubepods-besteffort-podc91168ad_cd98_4450_acb9_da7b6a25a99d.slice - libcontainer container kubepods-besteffort-podc91168ad_cd98_4450_acb9_da7b6a25a99d.slice.
Sep 13 00:07:52.112118 containerd[1508]: time="2025-09-13T00:07:52.111916521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:52.112118 containerd[1508]: time="2025-09-13T00:07:52.111996720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:52.112920 containerd[1508]: time="2025-09-13T00:07:52.112740009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:52.112977 kubelet[2564]: I0913 00:07:52.112778 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c91168ad-cd98-4450-acb9-da7b6a25a99d-lib-modules\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.112977 kubelet[2564]: I0913 00:07:52.112809 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c91168ad-cd98-4450-acb9-da7b6a25a99d-policysync\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.112977 kubelet[2564]: I0913 00:07:52.112824 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c91168ad-cd98-4450-acb9-da7b6a25a99d-var-lib-calico\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.112977 kubelet[2564]: I0913 00:07:52.112873 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c91168ad-cd98-4450-acb9-da7b6a25a99d-xtables-lock\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.112977 kubelet[2564]: I0913 00:07:52.112898 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c91168ad-cd98-4450-acb9-da7b6a25a99d-flexvol-driver-host\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.113342 kubelet[2564]: I0913 00:07:52.112923 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c91168ad-cd98-4450-acb9-da7b6a25a99d-tigera-ca-bundle\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.113342 kubelet[2564]: I0913 00:07:52.112942 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxbf5\" (UniqueName: \"kubernetes.io/projected/c91168ad-cd98-4450-acb9-da7b6a25a99d-kube-api-access-bxbf5\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.113342 kubelet[2564]: I0913 00:07:52.112954 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c91168ad-cd98-4450-acb9-da7b6a25a99d-cni-bin-dir\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.113342 kubelet[2564]: I0913 00:07:52.112966 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c91168ad-cd98-4450-acb9-da7b6a25a99d-node-certs\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.113342 kubelet[2564]: I0913 00:07:52.112976 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c91168ad-cd98-4450-acb9-da7b6a25a99d-var-run-calico\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.113861 kubelet[2564]: I0913 00:07:52.112991 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c91168ad-cd98-4450-acb9-da7b6a25a99d-cni-net-dir\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.113861 kubelet[2564]: I0913 00:07:52.113001 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c91168ad-cd98-4450-acb9-da7b6a25a99d-cni-log-dir\") pod \"calico-node-4h5q5\" (UID: \"c91168ad-cd98-4450-acb9-da7b6a25a99d\") " pod="calico-system/calico-node-4h5q5"
Sep 13 00:07:52.113937 containerd[1508]: time="2025-09-13T00:07:52.113180722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:52.136280 systemd[1]: Started cri-containerd-33d3577a21c658863b9129a10c6082d642389fb0f572e23ad7c2c29eddaca4b9.scope - libcontainer container 33d3577a21c658863b9129a10c6082d642389fb0f572e23ad7c2c29eddaca4b9.
Sep 13 00:07:52.170185 containerd[1508]: time="2025-09-13T00:07:52.170117073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8454ffb874-t7mf5,Uid:756bd750-e59f-42b4-9067-7dd298be8b40,Namespace:calico-system,Attempt:0,} returns sandbox id \"33d3577a21c658863b9129a10c6082d642389fb0f572e23ad7c2c29eddaca4b9\""
Sep 13 00:07:52.175121 containerd[1508]: time="2025-09-13T00:07:52.174703482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 00:07:52.221647 kubelet[2564]: E0913 00:07:52.221576 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.221647 kubelet[2564]: W0913 00:07:52.221598 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.221647 kubelet[2564]: E0913 00:07:52.221618 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.223742 kubelet[2564]: E0913 00:07:52.223675 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.223742 kubelet[2564]: W0913 00:07:52.223693 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.223742 kubelet[2564]: E0913 00:07:52.223711 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.369343 kubelet[2564]: E0913 00:07:52.367738 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs8pq" podUID="32aa0455-0ec7-4086-b5ba-65161853a1e0"
Sep 13 00:07:52.389843 containerd[1508]: time="2025-09-13T00:07:52.389387584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4h5q5,Uid:c91168ad-cd98-4450-acb9-da7b6a25a99d,Namespace:calico-system,Attempt:0,}"
Sep 13 00:07:52.404849 kubelet[2564]: E0913 00:07:52.404292 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.404849 kubelet[2564]: W0913 00:07:52.404332 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.404849 kubelet[2564]: E0913 00:07:52.404354 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.404986 kubelet[2564]: E0913 00:07:52.404866 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.404986 kubelet[2564]: W0913 00:07:52.404879 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.404986 kubelet[2564]: E0913 00:07:52.404893 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.406069 kubelet[2564]: E0913 00:07:52.405238 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.406069 kubelet[2564]: W0913 00:07:52.405251 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.406069 kubelet[2564]: E0913 00:07:52.405262 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.406069 kubelet[2564]: E0913 00:07:52.405614 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.406069 kubelet[2564]: W0913 00:07:52.405623 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.406069 kubelet[2564]: E0913 00:07:52.405633 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.406069 kubelet[2564]: E0913 00:07:52.405985 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.406069 kubelet[2564]: W0913 00:07:52.405995 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.406069 kubelet[2564]: E0913 00:07:52.406004 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.406407 kubelet[2564]: E0913 00:07:52.406310 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.406407 kubelet[2564]: W0913 00:07:52.406323 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.406407 kubelet[2564]: E0913 00:07:52.406332 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.406702 kubelet[2564]: E0913 00:07:52.406682 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.406702 kubelet[2564]: W0913 00:07:52.406700 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.406776 kubelet[2564]: E0913 00:07:52.406709 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.407092 kubelet[2564]: E0913 00:07:52.407069 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.407092 kubelet[2564]: W0913 00:07:52.407081 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.407092 kubelet[2564]: E0913 00:07:52.407089 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.407549 kubelet[2564]: E0913 00:07:52.407471 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.407549 kubelet[2564]: W0913 00:07:52.407482 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.407549 kubelet[2564]: E0913 00:07:52.407492 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.407889 kubelet[2564]: E0913 00:07:52.407785 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.407889 kubelet[2564]: W0913 00:07:52.407797 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.407889 kubelet[2564]: E0913 00:07:52.407805 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.408104 kubelet[2564]: E0913 00:07:52.408089 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.408317 kubelet[2564]: W0913 00:07:52.408106 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.408317 kubelet[2564]: E0913 00:07:52.408117 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.408503 kubelet[2564]: E0913 00:07:52.408486 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.408503 kubelet[2564]: W0913 00:07:52.408500 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.408635 kubelet[2564]: E0913 00:07:52.408509 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.410669 kubelet[2564]: E0913 00:07:52.410639 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.410669 kubelet[2564]: W0913 00:07:52.410655 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.410669 kubelet[2564]: E0913 00:07:52.410663 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.411083 kubelet[2564]: E0913 00:07:52.411065 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.411083 kubelet[2564]: W0913 00:07:52.411077 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.411176 kubelet[2564]: E0913 00:07:52.411086 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.411841 kubelet[2564]: E0913 00:07:52.411822 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.411841 kubelet[2564]: W0913 00:07:52.411835 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.411897 kubelet[2564]: E0913 00:07:52.411843 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.412171 kubelet[2564]: E0913 00:07:52.412113 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.412687 kubelet[2564]: W0913 00:07:52.412658 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.412724 kubelet[2564]: E0913 00:07:52.412681 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.412909 kubelet[2564]: E0913 00:07:52.412889 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.412909 kubelet[2564]: W0913 00:07:52.412901 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.413042 kubelet[2564]: E0913 00:07:52.412911 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.413380 kubelet[2564]: E0913 00:07:52.413122 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.413380 kubelet[2564]: W0913 00:07:52.413302 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.413380 kubelet[2564]: E0913 00:07:52.413321 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.414311 kubelet[2564]: E0913 00:07:52.414290 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.414311 kubelet[2564]: W0913 00:07:52.414304 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.414311 kubelet[2564]: E0913 00:07:52.414313 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.415127 kubelet[2564]: E0913 00:07:52.414762 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.415127 kubelet[2564]: W0913 00:07:52.414770 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.415127 kubelet[2564]: E0913 00:07:52.414778 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.415127 kubelet[2564]: E0913 00:07:52.415531 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.415127 kubelet[2564]: W0913 00:07:52.415539 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.415127 kubelet[2564]: E0913 00:07:52.415548 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.415127 kubelet[2564]: I0913 00:07:52.415572 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/32aa0455-0ec7-4086-b5ba-65161853a1e0-varrun\") pod \"csi-node-driver-cs8pq\" (UID: \"32aa0455-0ec7-4086-b5ba-65161853a1e0\") " pod="calico-system/csi-node-driver-cs8pq"
Sep 13 00:07:52.417615 kubelet[2564]: E0913 00:07:52.417595 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.417615 kubelet[2564]: W0913 00:07:52.417610 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.417682 kubelet[2564]: E0913 00:07:52.417621 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.417682 kubelet[2564]: I0913 00:07:52.417641 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fhht\" (UniqueName: \"kubernetes.io/projected/32aa0455-0ec7-4086-b5ba-65161853a1e0-kube-api-access-9fhht\") pod \"csi-node-driver-cs8pq\" (UID: \"32aa0455-0ec7-4086-b5ba-65161853a1e0\") " pod="calico-system/csi-node-driver-cs8pq"
Sep 13 00:07:52.417950 kubelet[2564]: E0913 00:07:52.417929 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.417950 kubelet[2564]: W0913 00:07:52.417943 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.418028 kubelet[2564]: E0913 00:07:52.417955 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.418381 kubelet[2564]: I0913 00:07:52.418337 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32aa0455-0ec7-4086-b5ba-65161853a1e0-kubelet-dir\") pod \"csi-node-driver-cs8pq\" (UID: \"32aa0455-0ec7-4086-b5ba-65161853a1e0\") " pod="calico-system/csi-node-driver-cs8pq"
Sep 13 00:07:52.418526 kubelet[2564]: E0913 00:07:52.418500 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.418526 kubelet[2564]: W0913 00:07:52.418514 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.418611 kubelet[2564]: E0913 00:07:52.418540 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.419384 kubelet[2564]: E0913 00:07:52.419364 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.419384 kubelet[2564]: W0913 00:07:52.419378 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.419455 kubelet[2564]: E0913 00:07:52.419398 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.420507 kubelet[2564]: E0913 00:07:52.420489 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.420507 kubelet[2564]: W0913 00:07:52.420503 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.422331 kubelet[2564]: E0913 00:07:52.420537 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.422331 kubelet[2564]: E0913 00:07:52.420707 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.422331 kubelet[2564]: W0913 00:07:52.420715 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.422331 kubelet[2564]: E0913 00:07:52.420779 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.422331 kubelet[2564]: E0913 00:07:52.421336 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.422331 kubelet[2564]: W0913 00:07:52.421347 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.422331 kubelet[2564]: E0913 00:07:52.421497 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.422331 kubelet[2564]: I0913 00:07:52.421519 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/32aa0455-0ec7-4086-b5ba-65161853a1e0-registration-dir\") pod \"csi-node-driver-cs8pq\" (UID: \"32aa0455-0ec7-4086-b5ba-65161853a1e0\") " pod="calico-system/csi-node-driver-cs8pq"
Sep 13 00:07:52.422331 kubelet[2564]: E0913 00:07:52.422183 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.423465 kubelet[2564]: W0913 00:07:52.422201 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.423465 kubelet[2564]: E0913 00:07:52.422233 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.423465 kubelet[2564]: E0913 00:07:52.422805 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.423465 kubelet[2564]: W0913 00:07:52.422814 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.423465 kubelet[2564]: E0913 00:07:52.423318 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.423465 kubelet[2564]: E0913 00:07:52.423451 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.423465 kubelet[2564]: W0913 00:07:52.423460 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.423616 kubelet[2564]: E0913 00:07:52.423546 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.423616 kubelet[2564]: I0913 00:07:52.423565 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/32aa0455-0ec7-4086-b5ba-65161853a1e0-socket-dir\") pod \"csi-node-driver-cs8pq\" (UID: \"32aa0455-0ec7-4086-b5ba-65161853a1e0\") " pod="calico-system/csi-node-driver-cs8pq"
Sep 13 00:07:52.423805 kubelet[2564]: E0913 00:07:52.423775 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.423805 kubelet[2564]: W0913 00:07:52.423789 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.423805 kubelet[2564]: E0913 00:07:52.423800 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.425622 kubelet[2564]: E0913 00:07:52.425197 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.425622 kubelet[2564]: W0913 00:07:52.425209 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.425622 kubelet[2564]: E0913 00:07:52.425246 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.425622 kubelet[2564]: E0913 00:07:52.425422 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.425622 kubelet[2564]: W0913 00:07:52.425430 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.425622 kubelet[2564]: E0913 00:07:52.425438 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.425622 kubelet[2564]: E0913 00:07:52.425600 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.425622 kubelet[2564]: W0913 00:07:52.425608 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.425622 kubelet[2564]: E0913 00:07:52.425617 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.434685 containerd[1508]: time="2025-09-13T00:07:52.434317702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:52.434809 containerd[1508]: time="2025-09-13T00:07:52.434734030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:52.434835 containerd[1508]: time="2025-09-13T00:07:52.434802197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:52.435012 containerd[1508]: time="2025-09-13T00:07:52.434893868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:52.449292 systemd[1]: Started cri-containerd-946036ec3f21a750d411feceb6e2e35dc01b1a050134d6ec8823ad220bbc48fc.scope - libcontainer container 946036ec3f21a750d411feceb6e2e35dc01b1a050134d6ec8823ad220bbc48fc.
Sep 13 00:07:52.524737 containerd[1508]: time="2025-09-13T00:07:52.524527211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4h5q5,Uid:c91168ad-cd98-4450-acb9-da7b6a25a99d,Namespace:calico-system,Attempt:0,} returns sandbox id \"946036ec3f21a750d411feceb6e2e35dc01b1a050134d6ec8823ad220bbc48fc\""
Sep 13 00:07:52.526460 kubelet[2564]: E0913 00:07:52.526437 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.526460 kubelet[2564]: W0913 00:07:52.526454 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.526574 kubelet[2564]: E0913 00:07:52.526473 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.527618 kubelet[2564]: E0913 00:07:52.527598 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.527618 kubelet[2564]: W0913 00:07:52.527612 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.527710 kubelet[2564]: E0913 00:07:52.527626 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.527955 kubelet[2564]: E0913 00:07:52.527938 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.527955 kubelet[2564]: W0913 00:07:52.527951 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.528014 kubelet[2564]: E0913 00:07:52.527983 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.528958 kubelet[2564]: E0913 00:07:52.528938 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.528958 kubelet[2564]: W0913 00:07:52.528950 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.529046 kubelet[2564]: E0913 00:07:52.529036 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.529187 kubelet[2564]: E0913 00:07:52.529170 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.529187 kubelet[2564]: W0913 00:07:52.529181 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.529280 kubelet[2564]: E0913 00:07:52.529261 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.529355 kubelet[2564]: E0913 00:07:52.529340 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.529355 kubelet[2564]: W0913 00:07:52.529350 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.529505 kubelet[2564]: E0913 00:07:52.529433 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.529530 kubelet[2564]: E0913 00:07:52.529508 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.529530 kubelet[2564]: W0913 00:07:52.529514 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.529530 kubelet[2564]: E0913 00:07:52.529524 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:52.529691 kubelet[2564]: E0913 00:07:52.529675 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:52.529691 kubelet[2564]: W0913 00:07:52.529685 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:52.529744 kubelet[2564]: E0913 00:07:52.529702 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 00:07:52.529859 kubelet[2564]: E0913 00:07:52.529834 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.529859 kubelet[2564]: W0913 00:07:52.529841 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.529896 kubelet[2564]: E0913 00:07:52.529861 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.536415 kubelet[2564]: E0913 00:07:52.536239 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.536415 kubelet[2564]: W0913 00:07:52.536254 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.536722 kubelet[2564]: E0913 00:07:52.536547 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.536722 kubelet[2564]: E0913 00:07:52.536675 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.536722 kubelet[2564]: W0913 00:07:52.536684 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.537180 kubelet[2564]: E0913 00:07:52.536965 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.537334 kubelet[2564]: E0913 00:07:52.537261 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.537607 kubelet[2564]: W0913 00:07:52.537433 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.537607 kubelet[2564]: E0913 00:07:52.537588 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.537881 kubelet[2564]: E0913 00:07:52.537820 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.537881 kubelet[2564]: W0913 00:07:52.537830 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.538005 kubelet[2564]: E0913 00:07:52.537946 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.538127 kubelet[2564]: E0913 00:07:52.538080 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.538127 kubelet[2564]: W0913 00:07:52.538089 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.538329 kubelet[2564]: E0913 00:07:52.538219 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.538406 kubelet[2564]: E0913 00:07:52.538387 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.538406 kubelet[2564]: W0913 00:07:52.538395 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.538591 kubelet[2564]: E0913 00:07:52.538531 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.538757 kubelet[2564]: E0913 00:07:52.538666 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.538757 kubelet[2564]: W0913 00:07:52.538674 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.538757 kubelet[2564]: E0913 00:07:52.538694 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.539017 kubelet[2564]: E0913 00:07:52.538903 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.539017 kubelet[2564]: W0913 00:07:52.538913 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.539017 kubelet[2564]: E0913 00:07:52.538923 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.539216 kubelet[2564]: E0913 00:07:52.539207 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.539303 kubelet[2564]: W0913 00:07:52.539262 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.539377 kubelet[2564]: E0913 00:07:52.539347 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.539560 kubelet[2564]: E0913 00:07:52.539504 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.539560 kubelet[2564]: W0913 00:07:52.539512 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.539705 kubelet[2564]: E0913 00:07:52.539669 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.539808 kubelet[2564]: E0913 00:07:52.539760 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.539808 kubelet[2564]: W0913 00:07:52.539768 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.539905 kubelet[2564]: E0913 00:07:52.539852 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.540101 kubelet[2564]: E0913 00:07:52.540015 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.540101 kubelet[2564]: W0913 00:07:52.540023 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.540353 kubelet[2564]: E0913 00:07:52.540246 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.540353 kubelet[2564]: E0913 00:07:52.540336 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.540353 kubelet[2564]: W0913 00:07:52.540342 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.540555 kubelet[2564]: E0913 00:07:52.540446 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.540712 kubelet[2564]: E0913 00:07:52.540671 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.540712 kubelet[2564]: W0913 00:07:52.540700 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.540853 kubelet[2564]: E0913 00:07:52.540797 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.541300 kubelet[2564]: E0913 00:07:52.541243 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.541300 kubelet[2564]: W0913 00:07:52.541251 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.541300 kubelet[2564]: E0913 00:07:52.541259 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.541990 kubelet[2564]: E0913 00:07:52.541948 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.541990 kubelet[2564]: W0913 00:07:52.541962 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.541990 kubelet[2564]: E0913 00:07:52.541971 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.562360 kubelet[2564]: E0913 00:07:52.562330 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.562360 kubelet[2564]: W0913 00:07:52.562356 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.562497 kubelet[2564]: E0913 00:07:52.562378 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:53.830194 kubelet[2564]: E0913 00:07:53.830144 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs8pq" podUID="32aa0455-0ec7-4086-b5ba-65161853a1e0" Sep 13 00:07:53.862477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153159433.mount: Deactivated successfully. 
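The repeating driver-call.go / plugins.go triplet above is the kubelet's FlexVolume prober: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes the driver binary with the argument init and expects a JSON status document on stdout. The nodeagent~uds/uds binary does not exist yet (Calico installs it later via the flexvol-driver container pulled further down in this log), so the exec fails, stdout stays empty, and JSON parsing fails with "unexpected end of JSON input". A minimal, hypothetical stub that would satisfy the probe (not Calico's real uds driver) could look like this:

```go
// flexvol-stub.go - a hypothetical, minimal FlexVolume driver stub.
// It answers the kubelet's "init" call with the JSON document that
// driver-call.go expects, i.e. exactly what is missing in the probe
// errors above. This is an illustrative sketch, not Calico's driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the FlexVolume driver response format.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// The kubelet parses this reply when probing the plugin directory.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Everything else is unsupported by this stub.
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}
```

Dropping an executable like this at the probed path would silence the triplet; in this log the real fix is the flexvol-driver container started at 00:07:55.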
Sep 13 00:07:54.280106 containerd[1508]: time="2025-09-13T00:07:54.280043611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:54.280943 containerd[1508]: time="2025-09-13T00:07:54.280890804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 00:07:54.281633 containerd[1508]: time="2025-09-13T00:07:54.281576526Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:54.289959 containerd[1508]: time="2025-09-13T00:07:54.289874517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:54.291239 containerd[1508]: time="2025-09-13T00:07:54.290274955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.115529595s"
Sep 13 00:07:54.291239 containerd[1508]: time="2025-09-13T00:07:54.290308830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:07:54.294128 containerd[1508]: time="2025-09-13T00:07:54.294056114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:07:54.309887 containerd[1508]: time="2025-09-13T00:07:54.309791757Z" level=info msg="CreateContainer within sandbox \"33d3577a21c658863b9129a10c6082d642389fb0f572e23ad7c2c29eddaca4b9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:07:54.324936 containerd[1508]: time="2025-09-13T00:07:54.324898465Z" level=info msg="CreateContainer within sandbox \"33d3577a21c658863b9129a10c6082d642389fb0f572e23ad7c2c29eddaca4b9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9d6836fcd66ca487046bb5acaa5e98456b2675f8ef549e30a2b794da1adfe8be\""
Sep 13 00:07:54.326533 containerd[1508]: time="2025-09-13T00:07:54.325535576Z" level=info msg="StartContainer for \"9d6836fcd66ca487046bb5acaa5e98456b2675f8ef549e30a2b794da1adfe8be\""
Sep 13 00:07:54.383374 systemd[1]: Started cri-containerd-9d6836fcd66ca487046bb5acaa5e98456b2675f8ef549e30a2b794da1adfe8be.scope - libcontainer container 9d6836fcd66ca487046bb5acaa5e98456b2675f8ef549e30a2b794da1adfe8be.
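The containerd lines above trace one full CRI round trip for the calico-typha container: PullImage resolves the tag to an image id, CreateContainer registers a container inside an existing pod sandbox, and StartContainer launches it, at which point systemd places the process in a transient cri-containerd-<id>.scope. A rough sketch of the same sequence against the v1 CRI API; the socket path is containerd's default and the sandbox id is an illustrative placeholder:

```go
// cri-flow.go - a sketch of the CRI call sequence visible above
// (PullImage -> CreateContainer -> StartContainer), assuming the
// v1 CRI API served on containerd's default socket.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx := context.Background()
	images := runtimeapi.NewImageServiceClient(conn)
	runtime := runtimeapi.NewRuntimeServiceClient(conn)

	// 1. Pull the image ("PullImage ... returns image reference").
	if _, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.3"},
	}); err != nil {
		panic(err)
	}

	// 2. Create the container inside an existing pod sandbox
	//    ("CreateContainer within sandbox ... returns container id").
	created, err := runtime.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: "sandbox-id-placeholder", // id returned by RunPodSandbox
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.3"},
		},
	})
	if err != nil {
		panic(err)
	}

	// 3. Start it ("StartContainer ... returns successfully").
	if _, err := runtime.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		panic(err)
	}
}
```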
Sep 13 00:07:54.418940 containerd[1508]: time="2025-09-13T00:07:54.418801341Z" level=info msg="StartContainer for \"9d6836fcd66ca487046bb5acaa5e98456b2675f8ef549e30a2b794da1adfe8be\" returns successfully"
Sep 13 00:07:54.917110 kubelet[2564]: I0913 00:07:54.915599 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8454ffb874-t7mf5" podStartSLOduration=1.7975999630000001 podStartE2EDuration="3.915580568s" podCreationTimestamp="2025-09-13 00:07:51 +0000 UTC" firstStartedPulling="2025-09-13 00:07:52.173811676 +0000 UTC m=+17.478789699" lastFinishedPulling="2025-09-13 00:07:54.291792282 +0000 UTC m=+19.596770304" observedRunningTime="2025-09-13 00:07:54.915301726 +0000 UTC m=+20.220279777" watchObservedRunningTime="2025-09-13 00:07:54.915580568 +0000 UTC m=+20.220558619"
Sep 13 00:07:54.933827 kubelet[2564]: E0913 00:07:54.933755 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:54.933827 kubelet[2564]: W0913 00:07:54.933821 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:54.934619 kubelet[2564]: E0913 00:07:54.933843 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the FlexVolume probe-failure triplet repeats, only timestamps changing, through 00:07:54.960]
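The pod_startup_latency_tracker entry above is internally consistent and shows how the kubelet separates image-pull time from the startup SLI, using only fields from that log line:

  podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp = 00:07:54.915580568 - 00:07:51.000000000 = 3.915580568s
  image pull time     = lastFinishedPulling - firstStartedPulling = m=+19.596770304 - m=+17.478789699 = 2.117980605s
  podStartSLOduration = 3.915580568 - 2.117980605 = 1.797599963s

which matches the reported podStartSLOduration=1.7975999630000001.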
Sep 13 00:07:55.807800 kubelet[2564]: E0913 00:07:55.807463 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs8pq" podUID="32aa0455-0ec7-4086-b5ba-65161853a1e0"
Sep 13 00:07:55.836536 containerd[1508]: time="2025-09-13T00:07:55.836482632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:55.837475 containerd[1508]: time="2025-09-13T00:07:55.837411830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 13 00:07:55.838409 containerd[1508]: time="2025-09-13T00:07:55.838359431Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:55.840178 containerd[1508]: time="2025-09-13T00:07:55.840056684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:55.840946 containerd[1508]: time="2025-09-13T00:07:55.840634685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.546437899s"
Sep 13 00:07:55.840946 containerd[1508]: time="2025-09-13T00:07:55.840667366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 13 00:07:55.843248 containerd[1508]: time="2025-09-13T00:07:55.843209879Z" level=info msg="CreateContainer within sandbox \"946036ec3f21a750d411feceb6e2e35dc01b1a050134d6ec8823ad220bbc48fc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 13 00:07:55.859558 containerd[1508]: time="2025-09-13T00:07:55.859504033Z" level=info msg="CreateContainer within sandbox \"946036ec3f21a750d411feceb6e2e35dc01b1a050134d6ec8823ad220bbc48fc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"158e292b01245e87fccedfaaaa5fb9e23f2597bbeba4dd67ab695c62518c09f9\""
Sep 13 00:07:55.862376 containerd[1508]: time="2025-09-13T00:07:55.862331861Z" level=info msg="StartContainer for \"158e292b01245e87fccedfaaaa5fb9e23f2597bbeba4dd67ab695c62518c09f9\""
Sep 13 00:07:55.897700 kubelet[2564]: I0913 00:07:55.897675 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:07:55.899362 systemd[1]: Started cri-containerd-158e292b01245e87fccedfaaaa5fb9e23f2597bbeba4dd67ab695c62518c09f9.scope - libcontainer container 158e292b01245e87fccedfaaaa5fb9e23f2597bbeba4dd67ab695c62518c09f9.
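The flexvol-driver container created above is what eventually ends the FlexVolume probe noise: Calico's pod2daemon-flexvol image copies a driver binary into the host's plugin directory as nodeagent~uds/uds, the exact path the kubelet has been failing to execute. In effect it does something like the following sketch; the source path inside the container and the error handling are assumptions, the real installer ships in the image:

```go
// install-flexvol.go - a sketch of what the flexvol-driver init
// container effectively does: copy the driver binary into the
// kubelet's FlexVolume plugin directory so the probe errors above
// stop. The destination path is taken from this log.
package main

import (
	"io"
	"os"
	"path/filepath"
)

func main() {
	src := "/usr/local/bin/flexvol" // assumed location inside the container image
	dstDir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"

	// Create the vendor~driver directory the kubelet probes.
	if err := os.MkdirAll(dstDir, 0o755); err != nil {
		panic(err)
	}
	in, err := os.Open(src)
	if err != nil {
		panic(err)
	}
	defer in.Close()

	// Install the binary as "uds", executable by the kubelet.
	out, err := os.OpenFile(filepath.Join(dstDir, "uds"),
		os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if _, err := io.Copy(out, in); err != nil {
		panic(err)
	}
}
```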
Sep 13 00:07:55.936036 containerd[1508]: time="2025-09-13T00:07:55.935938144Z" level=info msg="StartContainer for \"158e292b01245e87fccedfaaaa5fb9e23f2597bbeba4dd67ab695c62518c09f9\" returns successfully"
Sep 13 00:07:55.943100 kubelet[2564]: E0913 00:07:55.943068 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:07:55.943100 kubelet[2564]: W0913 00:07:55.943092 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:07:55.943100 kubelet[2564]: E0913 00:07:55.943110 2564 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the FlexVolume probe-failure triplet repeats, only timestamps changing, through 00:07:55.945]
Sep 13 00:07:55.947599 systemd[1]: cri-containerd-158e292b01245e87fccedfaaaa5fb9e23f2597bbeba4dd67ab695c62518c09f9.scope: Deactivated successfully.
Sep 13 00:07:55.968422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-158e292b01245e87fccedfaaaa5fb9e23f2597bbeba4dd67ab695c62518c09f9-rootfs.mount: Deactivated successfully.
Sep 13 00:07:55.999449 containerd[1508]: time="2025-09-13T00:07:55.977194588Z" level=info msg="shim disconnected" id=158e292b01245e87fccedfaaaa5fb9e23f2597bbeba4dd67ab695c62518c09f9 namespace=k8s.io
Sep 13 00:07:55.999449 containerd[1508]: time="2025-09-13T00:07:55.998192054Z" level=warning msg="cleaning up after shim disconnected" id=158e292b01245e87fccedfaaaa5fb9e23f2597bbeba4dd67ab695c62518c09f9 namespace=k8s.io
Sep 13 00:07:55.999449 containerd[1508]: time="2025-09-13T00:07:55.998207393Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:07:56.902731 containerd[1508]: time="2025-09-13T00:07:56.902467767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 00:07:57.806830 kubelet[2564]: E0913 00:07:57.806774 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs8pq" podUID="32aa0455-0ec7-4086-b5ba-65161853a1e0"
Sep 13 00:07:59.502371 containerd[1508]: time="2025-09-13T00:07:59.502319301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:59.503355 containerd[1508]: time="2025-09-13T00:07:59.503171204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 13 00:07:59.504995 containerd[1508]: time="2025-09-13T00:07:59.503971301Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:59.506365 containerd[1508]: time="2025-09-13T00:07:59.505661263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:59.506365 containerd[1508]: time="2025-09-13T00:07:59.506261095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.603741492s"
Sep 13 00:07:59.506365 containerd[1508]: time="2025-09-13T00:07:59.506291733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 13 00:07:59.509383 containerd[1508]: time="2025-09-13T00:07:59.509361125Z" level=info msg="CreateContainer within sandbox \"946036ec3f21a750d411feceb6e2e35dc01b1a050134d6ec8823ad220bbc48fc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 00:07:59.532536 containerd[1508]: time="2025-09-13T00:07:59.532503412Z" level=info msg="CreateContainer within sandbox \"946036ec3f21a750d411feceb6e2e35dc01b1a050134d6ec8823ad220bbc48fc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c175e56e98ec3a3cee843590e9a599ccf616424cfbbd1442a8dd210baf3b51d8\""
Sep 13 00:07:59.533268 containerd[1508]: time="2025-09-13T00:07:59.533226053Z" level=info msg="StartContainer for \"c175e56e98ec3a3cee843590e9a599ccf616424cfbbd1442a8dd210baf3b51d8\""
Sep 13 00:07:59.572362 systemd[1]: Started cri-containerd-c175e56e98ec3a3cee843590e9a599ccf616424cfbbd1442a8dd210baf3b51d8.scope - libcontainer container c175e56e98ec3a3cee843590e9a599ccf616424cfbbd1442a8dd210baf3b51d8.
Sep 13 00:07:59.601400 containerd[1508]: time="2025-09-13T00:07:59.601344042Z" level=info msg="StartContainer for \"c175e56e98ec3a3cee843590e9a599ccf616424cfbbd1442a8dd210baf3b51d8\" returns successfully"
Sep 13 00:07:59.807257 kubelet[2564]: E0913 00:07:59.807099 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs8pq" podUID="32aa0455-0ec7-4086-b5ba-65161853a1e0"
Sep 13 00:08:00.001521 systemd[1]: cri-containerd-c175e56e98ec3a3cee843590e9a599ccf616424cfbbd1442a8dd210baf3b51d8.scope: Deactivated successfully.
Sep 13 00:08:00.021085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c175e56e98ec3a3cee843590e9a599ccf616424cfbbd1442a8dd210baf3b51d8-rootfs.mount: Deactivated successfully.
Sep 13 00:08:00.031857 containerd[1508]: time="2025-09-13T00:08:00.031746759Z" level=info msg="shim disconnected" id=c175e56e98ec3a3cee843590e9a599ccf616424cfbbd1442a8dd210baf3b51d8 namespace=k8s.io
Sep 13 00:08:00.031857 containerd[1508]: time="2025-09-13T00:08:00.031849922Z" level=warning msg="cleaning up after shim disconnected" id=c175e56e98ec3a3cee843590e9a599ccf616424cfbbd1442a8dd210baf3b51d8 namespace=k8s.io
Sep 13 00:08:00.032307 containerd[1508]: time="2025-09-13T00:08:00.031863458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:08:00.065686 kubelet[2564]: I0913 00:08:00.065112 2564 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:08:00.108403 systemd[1]: Created slice kubepods-burstable-pod09981167_486a_4b04_a96e_d7e3bf65b3c6.slice - libcontainer container kubepods-burstable-pod09981167_486a_4b04_a96e_d7e3bf65b3c6.slice.
Sep 13 00:08:00.119194 systemd[1]: Created slice kubepods-burstable-pod067bc44d_3d11_4ed9_9172_bf4918269f30.slice - libcontainer container kubepods-burstable-pod067bc44d_3d11_4ed9_9172_bf4918269f30.slice.
Sep 13 00:08:00.124389 systemd[1]: Created slice kubepods-besteffort-pod8d773f84_ba61_4193_b743_eaaa25a9f55a.slice - libcontainer container kubepods-besteffort-pod8d773f84_ba61_4193_b743_eaaa25a9f55a.slice.
Sep 13 00:08:00.132852 systemd[1]: Created slice kubepods-besteffort-pod3f121a0f_8485_4222_991b_9a5797f41455.slice - libcontainer container kubepods-besteffort-pod3f121a0f_8485_4222_991b_9a5797f41455.slice. Sep 13 00:08:00.140802 systemd[1]: Created slice kubepods-besteffort-poddf321835_3377_45e7_8908_644fb093834e.slice - libcontainer container kubepods-besteffort-poddf321835_3377_45e7_8908_644fb093834e.slice. Sep 13 00:08:00.145929 systemd[1]: Created slice kubepods-besteffort-pod9b37eaa3_aff0_44f7_9179_93bbfd3008e3.slice - libcontainer container kubepods-besteffort-pod9b37eaa3_aff0_44f7_9179_93bbfd3008e3.slice. Sep 13 00:08:00.158875 systemd[1]: Created slice kubepods-besteffort-podc063e060_6c08_4a15_9d6a_d83f8c71099c.slice - libcontainer container kubepods-besteffort-podc063e060_6c08_4a15_9d6a_d83f8c71099c.slice. Sep 13 00:08:00.196099 kubelet[2564]: I0913 00:08:00.196058 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f121a0f-8485-4222-991b-9a5797f41455-whisker-backend-key-pair\") pod \"whisker-6d76f57766-5v2d8\" (UID: \"3f121a0f-8485-4222-991b-9a5797f41455\") " pod="calico-system/whisker-6d76f57766-5v2d8" Sep 13 00:08:00.196099 kubelet[2564]: I0913 00:08:00.196099 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd8v4\" (UniqueName: \"kubernetes.io/projected/c063e060-6c08-4a15-9d6a-d83f8c71099c-kube-api-access-wd8v4\") pod \"calico-apiserver-7b6ffd985b-9cv9t\" (UID: \"c063e060-6c08-4a15-9d6a-d83f8c71099c\") " pod="calico-apiserver/calico-apiserver-7b6ffd985b-9cv9t" Sep 13 00:08:00.196506 kubelet[2564]: I0913 00:08:00.196114 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/df321835-3377-45e7-8908-644fb093834e-goldmane-key-pair\") pod \"goldmane-7988f88666-5x8gx\" (UID: \"df321835-3377-45e7-8908-644fb093834e\") " pod="calico-system/goldmane-7988f88666-5x8gx" Sep 13 00:08:00.196506 kubelet[2564]: I0913 00:08:00.196194 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsp7t\" (UniqueName: \"kubernetes.io/projected/09981167-486a-4b04-a96e-d7e3bf65b3c6-kube-api-access-rsp7t\") pod \"coredns-7c65d6cfc9-hn74t\" (UID: \"09981167-486a-4b04-a96e-d7e3bf65b3c6\") " pod="kube-system/coredns-7c65d6cfc9-hn74t" Sep 13 00:08:00.196506 kubelet[2564]: I0913 00:08:00.196214 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/067bc44d-3d11-4ed9-9172-bf4918269f30-config-volume\") pod \"coredns-7c65d6cfc9-vl4w6\" (UID: \"067bc44d-3d11-4ed9-9172-bf4918269f30\") " pod="kube-system/coredns-7c65d6cfc9-vl4w6" Sep 13 00:08:00.196506 kubelet[2564]: I0913 00:08:00.196231 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwgf\" (UniqueName: \"kubernetes.io/projected/3f121a0f-8485-4222-991b-9a5797f41455-kube-api-access-bwwgf\") pod \"whisker-6d76f57766-5v2d8\" (UID: \"3f121a0f-8485-4222-991b-9a5797f41455\") " pod="calico-system/whisker-6d76f57766-5v2d8" Sep 13 00:08:00.196506 kubelet[2564]: I0913 00:08:00.196245 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/df321835-3377-45e7-8908-644fb093834e-config\") pod \"goldmane-7988f88666-5x8gx\" (UID: \"df321835-3377-45e7-8908-644fb093834e\") " pod="calico-system/goldmane-7988f88666-5x8gx" Sep 13 00:08:00.197714 kubelet[2564]: I0913 00:08:00.196256 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df321835-3377-45e7-8908-644fb093834e-goldmane-ca-bundle\") pod \"goldmane-7988f88666-5x8gx\" (UID: \"df321835-3377-45e7-8908-644fb093834e\") " pod="calico-system/goldmane-7988f88666-5x8gx" Sep 13 00:08:00.197714 kubelet[2564]: I0913 00:08:00.196273 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5svs\" (UniqueName: \"kubernetes.io/projected/8d773f84-ba61-4193-b743-eaaa25a9f55a-kube-api-access-k5svs\") pod \"calico-kube-controllers-88549f55-jdldt\" (UID: \"8d773f84-ba61-4193-b743-eaaa25a9f55a\") " pod="calico-system/calico-kube-controllers-88549f55-jdldt" Sep 13 00:08:00.197714 kubelet[2564]: I0913 00:08:00.196288 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvkz\" (UniqueName: \"kubernetes.io/projected/df321835-3377-45e7-8908-644fb093834e-kube-api-access-8qvkz\") pod \"goldmane-7988f88666-5x8gx\" (UID: \"df321835-3377-45e7-8908-644fb093834e\") " pod="calico-system/goldmane-7988f88666-5x8gx" Sep 13 00:08:00.197714 kubelet[2564]: I0913 00:08:00.196304 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d773f84-ba61-4193-b743-eaaa25a9f55a-tigera-ca-bundle\") pod \"calico-kube-controllers-88549f55-jdldt\" (UID: \"8d773f84-ba61-4193-b743-eaaa25a9f55a\") " pod="calico-system/calico-kube-controllers-88549f55-jdldt" Sep 13 00:08:00.197714 kubelet[2564]: I0913 00:08:00.196321 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtszm\" (UniqueName: \"kubernetes.io/projected/9b37eaa3-aff0-44f7-9179-93bbfd3008e3-kube-api-access-dtszm\") pod \"calico-apiserver-7b6ffd985b-5hsfg\" (UID: \"9b37eaa3-aff0-44f7-9179-93bbfd3008e3\") " pod="calico-apiserver/calico-apiserver-7b6ffd985b-5hsfg" Sep 13 00:08:00.197814 kubelet[2564]: I0913 00:08:00.196334 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgbpv\" (UniqueName: \"kubernetes.io/projected/067bc44d-3d11-4ed9-9172-bf4918269f30-kube-api-access-sgbpv\") pod \"coredns-7c65d6cfc9-vl4w6\" (UID: \"067bc44d-3d11-4ed9-9172-bf4918269f30\") " pod="kube-system/coredns-7c65d6cfc9-vl4w6" Sep 13 00:08:00.197814 kubelet[2564]: I0913 00:08:00.196346 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c063e060-6c08-4a15-9d6a-d83f8c71099c-calico-apiserver-certs\") pod \"calico-apiserver-7b6ffd985b-9cv9t\" (UID: \"c063e060-6c08-4a15-9d6a-d83f8c71099c\") " pod="calico-apiserver/calico-apiserver-7b6ffd985b-9cv9t" Sep 13 00:08:00.197814 kubelet[2564]: I0913 00:08:00.196362 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09981167-486a-4b04-a96e-d7e3bf65b3c6-config-volume\") pod \"coredns-7c65d6cfc9-hn74t\" (UID: \"09981167-486a-4b04-a96e-d7e3bf65b3c6\") " 
pod="kube-system/coredns-7c65d6cfc9-hn74t" Sep 13 00:08:00.197814 kubelet[2564]: I0913 00:08:00.196379 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f121a0f-8485-4222-991b-9a5797f41455-whisker-ca-bundle\") pod \"whisker-6d76f57766-5v2d8\" (UID: \"3f121a0f-8485-4222-991b-9a5797f41455\") " pod="calico-system/whisker-6d76f57766-5v2d8" Sep 13 00:08:00.197814 kubelet[2564]: I0913 00:08:00.196394 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b37eaa3-aff0-44f7-9179-93bbfd3008e3-calico-apiserver-certs\") pod \"calico-apiserver-7b6ffd985b-5hsfg\" (UID: \"9b37eaa3-aff0-44f7-9179-93bbfd3008e3\") " pod="calico-apiserver/calico-apiserver-7b6ffd985b-5hsfg" Sep 13 00:08:00.417486 containerd[1508]: time="2025-09-13T00:08:00.417374713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hn74t,Uid:09981167-486a-4b04-a96e-d7e3bf65b3c6,Namespace:kube-system,Attempt:0,}" Sep 13 00:08:00.423448 containerd[1508]: time="2025-09-13T00:08:00.423388446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vl4w6,Uid:067bc44d-3d11-4ed9-9172-bf4918269f30,Namespace:kube-system,Attempt:0,}" Sep 13 00:08:00.429748 containerd[1508]: time="2025-09-13T00:08:00.429704484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-88549f55-jdldt,Uid:8d773f84-ba61-4193-b743-eaaa25a9f55a,Namespace:calico-system,Attempt:0,}" Sep 13 00:08:00.438080 containerd[1508]: time="2025-09-13T00:08:00.438009223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d76f57766-5v2d8,Uid:3f121a0f-8485-4222-991b-9a5797f41455,Namespace:calico-system,Attempt:0,}" Sep 13 00:08:00.451811 containerd[1508]: time="2025-09-13T00:08:00.451765352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-5x8gx,Uid:df321835-3377-45e7-8908-644fb093834e,Namespace:calico-system,Attempt:0,}" Sep 13 00:08:00.456312 containerd[1508]: time="2025-09-13T00:08:00.456273668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6ffd985b-5hsfg,Uid:9b37eaa3-aff0-44f7-9179-93bbfd3008e3,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:08:00.477593 containerd[1508]: time="2025-09-13T00:08:00.477560418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6ffd985b-9cv9t,Uid:c063e060-6c08-4a15-9d6a-d83f8c71099c,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:08:00.793651 containerd[1508]: time="2025-09-13T00:08:00.793509305Z" level=error msg="Failed to destroy network for sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.796729 containerd[1508]: time="2025-09-13T00:08:00.796681070Z" level=error msg="Failed to destroy network for sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.799054 containerd[1508]: time="2025-09-13T00:08:00.798999678Z" level=error msg="encountered an error cleaning up failed sandbox 
\"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.799211 containerd[1508]: time="2025-09-13T00:08:00.799072144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hn74t,Uid:09981167-486a-4b04-a96e-d7e3bf65b3c6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.799902 containerd[1508]: time="2025-09-13T00:08:00.799870528Z" level=error msg="Failed to destroy network for sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.800279 containerd[1508]: time="2025-09-13T00:08:00.800224009Z" level=error msg="encountered an error cleaning up failed sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.800279 containerd[1508]: time="2025-09-13T00:08:00.800266368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6ffd985b-5hsfg,Uid:9b37eaa3-aff0-44f7-9179-93bbfd3008e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.805584 containerd[1508]: time="2025-09-13T00:08:00.804383382Z" level=error msg="encountered an error cleaning up failed sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.805584 containerd[1508]: time="2025-09-13T00:08:00.804421262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vl4w6,Uid:067bc44d-3d11-4ed9-9172-bf4918269f30,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.805584 containerd[1508]: time="2025-09-13T00:08:00.804836660Z" level=error msg="Failed to destroy network for sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.805584 containerd[1508]: time="2025-09-13T00:08:00.805062562Z" level=error msg="encountered an error cleaning up failed sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.805584 containerd[1508]: time="2025-09-13T00:08:00.805278276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6ffd985b-9cv9t,Uid:c063e060-6c08-4a15-9d6a-d83f8c71099c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.805584 containerd[1508]: time="2025-09-13T00:08:00.805345723Z" level=error msg="Failed to destroy network for sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.805772 kubelet[2564]: E0913 00:08:00.804583 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.805772 kubelet[2564]: E0913 00:08:00.804648 2564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vl4w6" Sep 13 00:08:00.805772 kubelet[2564]: E0913 00:08:00.804666 2564 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vl4w6" Sep 13 00:08:00.805921 kubelet[2564]: E0913 00:08:00.804703 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vl4w6_kube-system(067bc44d-3d11-4ed9-9172-bf4918269f30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vl4w6_kube-system(067bc44d-3d11-4ed9-9172-bf4918269f30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vl4w6" podUID="067bc44d-3d11-4ed9-9172-bf4918269f30" Sep 13 00:08:00.805921 kubelet[2564]: E0913 00:08:00.805117 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.805921 kubelet[2564]: E0913 00:08:00.805196 2564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hn74t" Sep 13 00:08:00.806006 containerd[1508]: time="2025-09-13T00:08:00.805817134Z" level=error msg="encountered an error cleaning up failed sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.806006 containerd[1508]: time="2025-09-13T00:08:00.805850587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-88549f55-jdldt,Uid:8d773f84-ba61-4193-b743-eaaa25a9f55a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.806058 kubelet[2564]: E0913 00:08:00.805213 2564 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hn74t" Sep 13 00:08:00.806058 kubelet[2564]: E0913 00:08:00.805435 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hn74t_kube-system(09981167-486a-4b04-a96e-d7e3bf65b3c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hn74t_kube-system(09981167-486a-4b04-a96e-d7e3bf65b3c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hn74t" podUID="09981167-486a-4b04-a96e-d7e3bf65b3c6" Sep 13 00:08:00.806058 kubelet[2564]: E0913 00:08:00.805508 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.806162 kubelet[2564]: E0913 00:08:00.805541 2564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6ffd985b-5hsfg" Sep 13 00:08:00.806162 kubelet[2564]: E0913 00:08:00.805566 2564 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6ffd985b-5hsfg" Sep 13 00:08:00.806162 kubelet[2564]: E0913 00:08:00.805606 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b6ffd985b-5hsfg_calico-apiserver(9b37eaa3-aff0-44f7-9179-93bbfd3008e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b6ffd985b-5hsfg_calico-apiserver(9b37eaa3-aff0-44f7-9179-93bbfd3008e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6ffd985b-5hsfg" podUID="9b37eaa3-aff0-44f7-9179-93bbfd3008e3" Sep 13 00:08:00.806242 kubelet[2564]: E0913 00:08:00.805963 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.806242 kubelet[2564]: E0913 00:08:00.806004 2564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-88549f55-jdldt" Sep 13 00:08:00.806242 kubelet[2564]: E0913 00:08:00.806017 2564 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-88549f55-jdldt" 
Sep 13 00:08:00.806887 kubelet[2564]: E0913 00:08:00.806049 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-88549f55-jdldt_calico-system(8d773f84-ba61-4193-b743-eaaa25a9f55a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-88549f55-jdldt_calico-system(8d773f84-ba61-4193-b743-eaaa25a9f55a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-88549f55-jdldt" podUID="8d773f84-ba61-4193-b743-eaaa25a9f55a" Sep 13 00:08:00.806934 containerd[1508]: time="2025-09-13T00:08:00.806452674Z" level=error msg="Failed to destroy network for sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.806934 containerd[1508]: time="2025-09-13T00:08:00.806599929Z" level=error msg="Failed to destroy network for sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.806934 containerd[1508]: time="2025-09-13T00:08:00.806767283Z" level=error msg="encountered an error cleaning up failed sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.806934 containerd[1508]: time="2025-09-13T00:08:00.806881506Z" level=error msg="encountered an error cleaning up failed sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.807067 containerd[1508]: time="2025-09-13T00:08:00.807046024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d76f57766-5v2d8,Uid:3f121a0f-8485-4222-991b-9a5797f41455,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.807254 containerd[1508]: time="2025-09-13T00:08:00.807177169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-5x8gx,Uid:df321835-3377-45e7-8908-644fb093834e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.809083 kubelet[2564]: E0913 00:08:00.809020 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.809083 kubelet[2564]: E0913 00:08:00.809070 2564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6ffd985b-9cv9t" Sep 13 00:08:00.809083 kubelet[2564]: E0913 00:08:00.809086 2564 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6ffd985b-9cv9t" Sep 13 00:08:00.810206 kubelet[2564]: E0913 00:08:00.809108 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b6ffd985b-9cv9t_calico-apiserver(c063e060-6c08-4a15-9d6a-d83f8c71099c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b6ffd985b-9cv9t_calico-apiserver(c063e060-6c08-4a15-9d6a-d83f8c71099c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6ffd985b-9cv9t" podUID="c063e060-6c08-4a15-9d6a-d83f8c71099c" Sep 13 00:08:00.810206 kubelet[2564]: E0913 00:08:00.809271 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.810206 kubelet[2564]: E0913 00:08:00.809290 2564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-5x8gx" Sep 13 00:08:00.810294 kubelet[2564]: E0913 00:08:00.809301 2564 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-5x8gx" Sep 13 00:08:00.810294 kubelet[2564]: E0913 00:08:00.809330 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-5x8gx_calico-system(df321835-3377-45e7-8908-644fb093834e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-5x8gx_calico-system(df321835-3377-45e7-8908-644fb093834e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-5x8gx" podUID="df321835-3377-45e7-8908-644fb093834e" Sep 13 00:08:00.810294 kubelet[2564]: E0913 00:08:00.809353 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.810370 kubelet[2564]: E0913 00:08:00.809406 2564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d76f57766-5v2d8" Sep 13 00:08:00.810370 kubelet[2564]: E0913 00:08:00.809418 2564 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d76f57766-5v2d8" Sep 13 00:08:00.810370 kubelet[2564]: E0913 00:08:00.809443 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d76f57766-5v2d8_calico-system(3f121a0f-8485-4222-991b-9a5797f41455)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d76f57766-5v2d8_calico-system(3f121a0f-8485-4222-991b-9a5797f41455)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d76f57766-5v2d8" podUID="3f121a0f-8485-4222-991b-9a5797f41455" Sep 13 00:08:00.915095 kubelet[2564]: I0913 00:08:00.915049 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:00.918213 kubelet[2564]: I0913 00:08:00.918170 2564 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:00.920815 containerd[1508]: time="2025-09-13T00:08:00.920750678Z" level=info msg="StopPodSandbox for \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\"" Sep 13 00:08:00.924069 containerd[1508]: time="2025-09-13T00:08:00.924022270Z" level=info msg="Ensure that sandbox dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915 in task-service has been cleanup successfully" Sep 13 00:08:00.925954 containerd[1508]: time="2025-09-13T00:08:00.925569806Z" level=info msg="StopPodSandbox for \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\"" Sep 13 00:08:00.925954 containerd[1508]: time="2025-09-13T00:08:00.925734514Z" level=info msg="Ensure that sandbox 345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717 in task-service has been cleanup successfully" Sep 13 00:08:00.927598 kubelet[2564]: I0913 00:08:00.927567 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:00.928272 containerd[1508]: time="2025-09-13T00:08:00.928232267Z" level=info msg="StopPodSandbox for \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\"" Sep 13 00:08:00.928767 containerd[1508]: time="2025-09-13T00:08:00.928710913Z" level=info msg="Ensure that sandbox d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0 in task-service has been cleanup successfully" Sep 13 00:08:00.932288 kubelet[2564]: I0913 00:08:00.932225 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:00.934805 containerd[1508]: time="2025-09-13T00:08:00.934192911Z" level=info msg="StopPodSandbox for \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\"" Sep 13 00:08:00.935560 containerd[1508]: time="2025-09-13T00:08:00.935384390Z" level=info msg="Ensure that sandbox 5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243 in task-service has been cleanup successfully" Sep 13 00:08:00.941512 kubelet[2564]: I0913 00:08:00.941001 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:00.944355 containerd[1508]: time="2025-09-13T00:08:00.944301825Z" level=info msg="StopPodSandbox for \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\"" Sep 13 00:08:00.947296 containerd[1508]: time="2025-09-13T00:08:00.947233891Z" level=info msg="Ensure that sandbox 6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32 in task-service has been cleanup successfully" Sep 13 00:08:00.958494 containerd[1508]: time="2025-09-13T00:08:00.958267546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:08:00.959785 kubelet[2564]: I0913 00:08:00.959444 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:00.961489 containerd[1508]: time="2025-09-13T00:08:00.959959062Z" level=info msg="StopPodSandbox for \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\"" Sep 13 00:08:00.961489 containerd[1508]: time="2025-09-13T00:08:00.960093583Z" level=info msg="Ensure that sandbox 
2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e in task-service has been cleanup successfully" Sep 13 00:08:00.969703 kubelet[2564]: I0913 00:08:00.969349 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:00.969953 containerd[1508]: time="2025-09-13T00:08:00.969860749Z" level=info msg="StopPodSandbox for \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\"" Sep 13 00:08:00.970906 containerd[1508]: time="2025-09-13T00:08:00.970889063Z" level=info msg="Ensure that sandbox 45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815 in task-service has been cleanup successfully" Sep 13 00:08:01.005351 containerd[1508]: time="2025-09-13T00:08:01.004456132Z" level=error msg="StopPodSandbox for \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\" failed" error="failed to destroy network for sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.005864 kubelet[2564]: E0913 00:08:01.005836 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:01.006097 kubelet[2564]: E0913 00:08:01.006053 2564 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243"} Sep 13 00:08:01.009378 kubelet[2564]: E0913 00:08:01.008095 2564 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f121a0f-8485-4222-991b-9a5797f41455\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:01.009378 kubelet[2564]: E0913 00:08:01.008164 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f121a0f-8485-4222-991b-9a5797f41455\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d76f57766-5v2d8" podUID="3f121a0f-8485-4222-991b-9a5797f41455" Sep 13 00:08:01.022557 containerd[1508]: time="2025-09-13T00:08:01.022495259Z" level=error msg="StopPodSandbox for \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\" failed" error="failed to destroy network for sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.022933 kubelet[2564]: E0913 00:08:01.022808 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:01.022933 kubelet[2564]: E0913 00:08:01.022857 2564 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e"} Sep 13 00:08:01.022933 kubelet[2564]: E0913 00:08:01.022885 2564 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b37eaa3-aff0-44f7-9179-93bbfd3008e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:01.022933 kubelet[2564]: E0913 00:08:01.022903 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b37eaa3-aff0-44f7-9179-93bbfd3008e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6ffd985b-5hsfg" podUID="9b37eaa3-aff0-44f7-9179-93bbfd3008e3" Sep 13 00:08:01.034441 containerd[1508]: time="2025-09-13T00:08:01.034381744Z" level=error msg="StopPodSandbox for \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\" failed" error="failed to destroy network for sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.034642 kubelet[2564]: E0913 00:08:01.034602 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:01.034719 kubelet[2564]: E0913 00:08:01.034656 2564 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0"} Sep 13 00:08:01.034719 kubelet[2564]: E0913 00:08:01.034687 2564 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c063e060-6c08-4a15-9d6a-d83f8c71099c\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:01.034789 kubelet[2564]: E0913 00:08:01.034711 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c063e060-6c08-4a15-9d6a-d83f8c71099c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6ffd985b-9cv9t" podUID="c063e060-6c08-4a15-9d6a-d83f8c71099c" Sep 13 00:08:01.041411 containerd[1508]: time="2025-09-13T00:08:01.041363449Z" level=error msg="StopPodSandbox for \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\" failed" error="failed to destroy network for sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.041678 kubelet[2564]: E0913 00:08:01.041607 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:01.041737 kubelet[2564]: E0913 00:08:01.041678 2564 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717"} Sep 13 00:08:01.041737 kubelet[2564]: E0913 00:08:01.041708 2564 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"067bc44d-3d11-4ed9-9172-bf4918269f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:01.041812 kubelet[2564]: E0913 00:08:01.041744 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"067bc44d-3d11-4ed9-9172-bf4918269f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vl4w6" podUID="067bc44d-3d11-4ed9-9172-bf4918269f30" Sep 13 00:08:01.044369 containerd[1508]: time="2025-09-13T00:08:01.044279155Z" level=error 
msg="StopPodSandbox for \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\" failed" error="failed to destroy network for sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.046001 kubelet[2564]: E0913 00:08:01.044439 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:01.046001 kubelet[2564]: E0913 00:08:01.045226 2564 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915"} Sep 13 00:08:01.046001 kubelet[2564]: E0913 00:08:01.045251 2564 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d773f84-ba61-4193-b743-eaaa25a9f55a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:01.046001 kubelet[2564]: E0913 00:08:01.045289 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d773f84-ba61-4193-b743-eaaa25a9f55a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-88549f55-jdldt" podUID="8d773f84-ba61-4193-b743-eaaa25a9f55a" Sep 13 00:08:01.055370 containerd[1508]: time="2025-09-13T00:08:01.055331468Z" level=error msg="StopPodSandbox for \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\" failed" error="failed to destroy network for sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.055525 kubelet[2564]: E0913 00:08:01.055480 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:01.055525 kubelet[2564]: E0913 00:08:01.055518 2564 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815"} Sep 13 00:08:01.056229 kubelet[2564]: E0913 00:08:01.055542 2564 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df321835-3377-45e7-8908-644fb093834e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:01.056229 kubelet[2564]: E0913 00:08:01.055561 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df321835-3377-45e7-8908-644fb093834e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-5x8gx" podUID="df321835-3377-45e7-8908-644fb093834e" Sep 13 00:08:01.056517 containerd[1508]: time="2025-09-13T00:08:01.056463827Z" level=error msg="StopPodSandbox for \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\" failed" error="failed to destroy network for sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.056623 kubelet[2564]: E0913 00:08:01.056598 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:01.056666 kubelet[2564]: E0913 00:08:01.056630 2564 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32"} Sep 13 00:08:01.056666 kubelet[2564]: E0913 00:08:01.056656 2564 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09981167-486a-4b04-a96e-d7e3bf65b3c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:01.056725 kubelet[2564]: E0913 00:08:01.056673 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09981167-486a-4b04-a96e-d7e3bf65b3c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hn74t" podUID="09981167-486a-4b04-a96e-d7e3bf65b3c6" Sep 13 00:08:01.519469 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e-shm.mount: Deactivated successfully. Sep 13 00:08:01.519565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243-shm.mount: Deactivated successfully. Sep 13 00:08:01.519621 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915-shm.mount: Deactivated successfully. Sep 13 00:08:01.519672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815-shm.mount: Deactivated successfully. Sep 13 00:08:01.519731 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32-shm.mount: Deactivated successfully. Sep 13 00:08:01.519784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717-shm.mount: Deactivated successfully. Sep 13 00:08:01.814465 systemd[1]: Created slice kubepods-besteffort-pod32aa0455_0ec7_4086_b5ba_65161853a1e0.slice - libcontainer container kubepods-besteffort-pod32aa0455_0ec7_4086_b5ba_65161853a1e0.slice. Sep 13 00:08:01.817285 containerd[1508]: time="2025-09-13T00:08:01.816873341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cs8pq,Uid:32aa0455-0ec7-4086-b5ba-65161853a1e0,Namespace:calico-system,Attempt:0,}" Sep 13 00:08:01.889589 containerd[1508]: time="2025-09-13T00:08:01.889526772Z" level=error msg="Failed to destroy network for sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.890711 containerd[1508]: time="2025-09-13T00:08:01.890489935Z" level=error msg="encountered an error cleaning up failed sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.890711 containerd[1508]: time="2025-09-13T00:08:01.890551089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cs8pq,Uid:32aa0455-0ec7-4086-b5ba-65161853a1e0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.890947 kubelet[2564]: E0913 00:08:01.890912 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:01.892613 kubelet[2564]: E0913 00:08:01.891553 2564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cs8pq" Sep 13 00:08:01.892613 kubelet[2564]: E0913 00:08:01.892212 2564 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cs8pq" Sep 13 00:08:01.892613 kubelet[2564]: E0913 00:08:01.892313 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cs8pq_calico-system(32aa0455-0ec7-4086-b5ba-65161853a1e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cs8pq_calico-system(32aa0455-0ec7-4086-b5ba-65161853a1e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cs8pq" podUID="32aa0455-0ec7-4086-b5ba-65161853a1e0" Sep 13 00:08:01.893041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672-shm.mount: Deactivated successfully. 
Sep 13 00:08:01.972606 kubelet[2564]: I0913 00:08:01.972561 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:01.974899 containerd[1508]: time="2025-09-13T00:08:01.973296096Z" level=info msg="StopPodSandbox for \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\"" Sep 13 00:08:01.974899 containerd[1508]: time="2025-09-13T00:08:01.973456535Z" level=info msg="Ensure that sandbox 757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672 in task-service has been cleanup successfully" Sep 13 00:08:02.001891 containerd[1508]: time="2025-09-13T00:08:02.001828250Z" level=error msg="StopPodSandbox for \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\" failed" error="failed to destroy network for sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:02.002170 kubelet[2564]: E0913 00:08:02.002083 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:02.002250 kubelet[2564]: E0913 00:08:02.002205 2564 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672"} Sep 13 00:08:02.002250 kubelet[2564]: E0913 00:08:02.002244 2564 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"32aa0455-0ec7-4086-b5ba-65161853a1e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:02.002349 kubelet[2564]: E0913 00:08:02.002268 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"32aa0455-0ec7-4086-b5ba-65161853a1e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cs8pq" podUID="32aa0455-0ec7-4086-b5ba-65161853a1e0" Sep 13 00:08:04.652664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount380783727.mount: Deactivated successfully. 
Sep 13 00:08:04.716043 containerd[1508]: time="2025-09-13T00:08:04.709411258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:08:04.718663 containerd[1508]: time="2025-09-13T00:08:04.717698492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 3.759391802s" Sep 13 00:08:04.718663 containerd[1508]: time="2025-09-13T00:08:04.717734029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:08:04.754185 containerd[1508]: time="2025-09-13T00:08:04.753882719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:04.781545 containerd[1508]: time="2025-09-13T00:08:04.780843230Z" level=info msg="CreateContainer within sandbox \"946036ec3f21a750d411feceb6e2e35dc01b1a050134d6ec8823ad220bbc48fc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:08:04.787430 containerd[1508]: time="2025-09-13T00:08:04.787394666Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:04.788369 containerd[1508]: time="2025-09-13T00:08:04.788342700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:04.877779 containerd[1508]: time="2025-09-13T00:08:04.876926191Z" level=info msg="CreateContainer within sandbox \"946036ec3f21a750d411feceb6e2e35dc01b1a050134d6ec8823ad220bbc48fc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"97db97ee179d5084fa7a508c872baf55c00a93cbaa1c5d4ae0fc1de96ce1db80\"" Sep 13 00:08:04.889695 containerd[1508]: time="2025-09-13T00:08:04.889670748Z" level=info msg="StartContainer for \"97db97ee179d5084fa7a508c872baf55c00a93cbaa1c5d4ae0fc1de96ce1db80\"" Sep 13 00:08:04.968076 systemd[1]: Started cri-containerd-97db97ee179d5084fa7a508c872baf55c00a93cbaa1c5d4ae0fc1de96ce1db80.scope - libcontainer container 97db97ee179d5084fa7a508c872baf55c00a93cbaa1c5d4ae0fc1de96ce1db80. Sep 13 00:08:05.003076 containerd[1508]: time="2025-09-13T00:08:05.003037234Z" level=info msg="StartContainer for \"97db97ee179d5084fa7a508c872baf55c00a93cbaa1c5d4ae0fc1de96ce1db80\" returns successfully" Sep 13 00:08:05.115160 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:08:05.122237 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 13 00:08:05.130476 kubelet[2564]: I0913 00:08:05.115770 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4h5q5" podStartSLOduration=0.880069159 podStartE2EDuration="13.076300198s" podCreationTimestamp="2025-09-13 00:07:52 +0000 UTC" firstStartedPulling="2025-09-13 00:07:52.527338885 +0000 UTC m=+17.832316907" lastFinishedPulling="2025-09-13 00:08:04.723569924 +0000 UTC m=+30.028547946" observedRunningTime="2025-09-13 00:08:05.071342905 +0000 UTC m=+30.376320928" watchObservedRunningTime="2025-09-13 00:08:05.076300198 +0000 UTC m=+30.381278219" Sep 13 00:08:05.308947 containerd[1508]: time="2025-09-13T00:08:05.308466713Z" level=info msg="StopPodSandbox for \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\"" Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.376 [INFO][3821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.379 [INFO][3821] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" iface="eth0" netns="/var/run/netns/cni-74c6d767-e720-f89c-9b85-f0f68c736692" Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.382 [INFO][3821] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" iface="eth0" netns="/var/run/netns/cni-74c6d767-e720-f89c-9b85-f0f68c736692" Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.384 [INFO][3821] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" iface="eth0" netns="/var/run/netns/cni-74c6d767-e720-f89c-9b85-f0f68c736692" Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.384 [INFO][3821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.384 [INFO][3821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.552 [INFO][3828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" HandleID="k8s-pod-network.5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.555 [INFO][3828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.555 [INFO][3828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.567 [WARNING][3828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" HandleID="k8s-pod-network.5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.567 [INFO][3828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" HandleID="k8s-pod-network.5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.569 [INFO][3828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:05.573501 containerd[1508]: 2025-09-13 00:08:05.571 [INFO][3821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:05.573950 containerd[1508]: time="2025-09-13T00:08:05.573513978Z" level=info msg="TearDown network for sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\" successfully" Sep 13 00:08:05.573950 containerd[1508]: time="2025-09-13T00:08:05.573546229Z" level=info msg="StopPodSandbox for \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\" returns successfully" Sep 13 00:08:05.590092 kubelet[2564]: I0913 00:08:05.590044 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:05.648411 kubelet[2564]: I0913 00:08:05.648349 2564 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f121a0f-8485-4222-991b-9a5797f41455-whisker-backend-key-pair\") pod \"3f121a0f-8485-4222-991b-9a5797f41455\" (UID: \"3f121a0f-8485-4222-991b-9a5797f41455\") " Sep 13 00:08:05.648411 kubelet[2564]: I0913 00:08:05.648398 2564 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwwgf\" (UniqueName: \"kubernetes.io/projected/3f121a0f-8485-4222-991b-9a5797f41455-kube-api-access-bwwgf\") pod \"3f121a0f-8485-4222-991b-9a5797f41455\" (UID: \"3f121a0f-8485-4222-991b-9a5797f41455\") " Sep 13 00:08:05.654321 systemd[1]: run-netns-cni\x2d74c6d767\x2de720\x2df89c\x2d9b85\x2df0f68c736692.mount: Deactivated successfully. Sep 13 00:08:05.668577 systemd[1]: var-lib-kubelet-pods-3f121a0f\x2d8485\x2d4222\x2d991b\x2d9a5797f41455-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbwwgf.mount: Deactivated successfully. Sep 13 00:08:05.679509 systemd[1]: var-lib-kubelet-pods-3f121a0f\x2d8485\x2d4222\x2d991b\x2d9a5797f41455-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:08:05.684165 kubelet[2564]: I0913 00:08:05.683737 2564 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f121a0f-8485-4222-991b-9a5797f41455-whisker-ca-bundle\") pod \"3f121a0f-8485-4222-991b-9a5797f41455\" (UID: \"3f121a0f-8485-4222-991b-9a5797f41455\") " Sep 13 00:08:05.685167 kubelet[2564]: I0913 00:08:05.682109 2564 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f121a0f-8485-4222-991b-9a5797f41455-kube-api-access-bwwgf" (OuterVolumeSpecName: "kube-api-access-bwwgf") pod "3f121a0f-8485-4222-991b-9a5797f41455" (UID: "3f121a0f-8485-4222-991b-9a5797f41455"). 
InnerVolumeSpecName "kube-api-access-bwwgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:08:05.685648 kubelet[2564]: I0913 00:08:05.684258 2564 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f121a0f-8485-4222-991b-9a5797f41455-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3f121a0f-8485-4222-991b-9a5797f41455" (UID: "3f121a0f-8485-4222-991b-9a5797f41455"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:08:05.685767 kubelet[2564]: I0913 00:08:05.684517 2564 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f121a0f-8485-4222-991b-9a5797f41455-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3f121a0f-8485-4222-991b-9a5797f41455" (UID: "3f121a0f-8485-4222-991b-9a5797f41455"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:08:05.784940 kubelet[2564]: I0913 00:08:05.784878 2564 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f121a0f-8485-4222-991b-9a5797f41455-whisker-backend-key-pair\") on node \"ci-4081-3-5-n-8d584fda4c\" DevicePath \"\"" Sep 13 00:08:05.784940 kubelet[2564]: I0913 00:08:05.784936 2564 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwwgf\" (UniqueName: \"kubernetes.io/projected/3f121a0f-8485-4222-991b-9a5797f41455-kube-api-access-bwwgf\") on node \"ci-4081-3-5-n-8d584fda4c\" DevicePath \"\"" Sep 13 00:08:05.785120 kubelet[2564]: I0913 00:08:05.784958 2564 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f121a0f-8485-4222-991b-9a5797f41455-whisker-ca-bundle\") on node \"ci-4081-3-5-n-8d584fda4c\" DevicePath \"\"" Sep 13 00:08:06.050814 systemd[1]: Removed slice kubepods-besteffort-pod3f121a0f_8485_4222_991b_9a5797f41455.slice - libcontainer container kubepods-besteffort-pod3f121a0f_8485_4222_991b_9a5797f41455.slice. Sep 13 00:08:06.153453 systemd[1]: Created slice kubepods-besteffort-podf18899d6_04af_44d2_9702_3cd5887a569a.slice - libcontainer container kubepods-besteffort-podf18899d6_04af_44d2_9702_3cd5887a569a.slice. 
Sep 13 00:08:06.187510 kubelet[2564]: I0913 00:08:06.187368 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f18899d6-04af-44d2-9702-3cd5887a569a-whisker-backend-key-pair\") pod \"whisker-784ccd46cc-hm5jp\" (UID: \"f18899d6-04af-44d2-9702-3cd5887a569a\") " pod="calico-system/whisker-784ccd46cc-hm5jp" Sep 13 00:08:06.187510 kubelet[2564]: I0913 00:08:06.187424 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbd5v\" (UniqueName: \"kubernetes.io/projected/f18899d6-04af-44d2-9702-3cd5887a569a-kube-api-access-nbd5v\") pod \"whisker-784ccd46cc-hm5jp\" (UID: \"f18899d6-04af-44d2-9702-3cd5887a569a\") " pod="calico-system/whisker-784ccd46cc-hm5jp" Sep 13 00:08:06.187510 kubelet[2564]: I0913 00:08:06.187449 2564 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f18899d6-04af-44d2-9702-3cd5887a569a-whisker-ca-bundle\") pod \"whisker-784ccd46cc-hm5jp\" (UID: \"f18899d6-04af-44d2-9702-3cd5887a569a\") " pod="calico-system/whisker-784ccd46cc-hm5jp" Sep 13 00:08:06.457286 containerd[1508]: time="2025-09-13T00:08:06.457201253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784ccd46cc-hm5jp,Uid:f18899d6-04af-44d2-9702-3cd5887a569a,Namespace:calico-system,Attempt:0,}" Sep 13 00:08:06.610817 systemd-networkd[1400]: cali7899ace27c8: Link UP Sep 13 00:08:06.611080 systemd-networkd[1400]: cali7899ace27c8: Gained carrier Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.488 [INFO][3873] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.498 [INFO][3873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0 whisker-784ccd46cc- calico-system f18899d6-04af-44d2-9702-3cd5887a569a 866 0 2025-09-13 00:08:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:784ccd46cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-n-8d584fda4c whisker-784ccd46cc-hm5jp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7899ace27c8 [] [] }} ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Namespace="calico-system" Pod="whisker-784ccd46cc-hm5jp" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.499 [INFO][3873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Namespace="calico-system" Pod="whisker-784ccd46cc-hm5jp" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.537 [INFO][3885] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" HandleID="k8s-pod-network.beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.538 [INFO][3885] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" HandleID="k8s-pod-network.beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-8d584fda4c", "pod":"whisker-784ccd46cc-hm5jp", "timestamp":"2025-09-13 00:08:06.537455502 +0000 UTC"}, Hostname:"ci-4081-3-5-n-8d584fda4c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.538 [INFO][3885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.539 [INFO][3885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.539 [INFO][3885] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-8d584fda4c' Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.554 [INFO][3885] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.563 [INFO][3885] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.568 [INFO][3885] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.569 [INFO][3885] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.571 [INFO][3885] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.571 [INFO][3885] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.573 [INFO][3885] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.579 [INFO][3885] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.583 [INFO][3885] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.129/26] block=192.168.86.128/26 handle="k8s-pod-network.beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.583 [INFO][3885] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.129/26] handle="k8s-pod-network.beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:06.631224 
containerd[1508]: 2025-09-13 00:08:06.583 [INFO][3885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:06.631224 containerd[1508]: 2025-09-13 00:08:06.583 [INFO][3885] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.129/26] IPv6=[] ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" HandleID="k8s-pod-network.beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0" Sep 13 00:08:06.632345 containerd[1508]: 2025-09-13 00:08:06.586 [INFO][3873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Namespace="calico-system" Pod="whisker-784ccd46cc-hm5jp" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0", GenerateName:"whisker-784ccd46cc-", Namespace:"calico-system", SelfLink:"", UID:"f18899d6-04af-44d2-9702-3cd5887a569a", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784ccd46cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"", Pod:"whisker-784ccd46cc-hm5jp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.86.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7899ace27c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:06.632345 containerd[1508]: 2025-09-13 00:08:06.586 [INFO][3873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.129/32] ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Namespace="calico-system" Pod="whisker-784ccd46cc-hm5jp" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0" Sep 13 00:08:06.632345 containerd[1508]: 2025-09-13 00:08:06.586 [INFO][3873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7899ace27c8 ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Namespace="calico-system" Pod="whisker-784ccd46cc-hm5jp" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0" Sep 13 00:08:06.632345 containerd[1508]: 2025-09-13 00:08:06.607 [INFO][3873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Namespace="calico-system" Pod="whisker-784ccd46cc-hm5jp" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0" Sep 13 00:08:06.632345 containerd[1508]: 2025-09-13 00:08:06.608 [INFO][3873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID
to endpoint ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Namespace="calico-system" Pod="whisker-784ccd46cc-hm5jp" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0", GenerateName:"whisker-784ccd46cc-", Namespace:"calico-system", SelfLink:"", UID:"f18899d6-04af-44d2-9702-3cd5887a569a", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784ccd46cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead", Pod:"whisker-784ccd46cc-hm5jp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.86.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7899ace27c8", MAC:"0a:41:cd:03:4f:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:06.632345 containerd[1508]: 2025-09-13 00:08:06.621 [INFO][3873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead" Namespace="calico-system" Pod="whisker-784ccd46cc-hm5jp" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-whisker--784ccd46cc--hm5jp-eth0" Sep 13 00:08:06.705095 containerd[1508]: time="2025-09-13T00:08:06.704898842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:06.705095 containerd[1508]: time="2025-09-13T00:08:06.704939749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:06.705095 containerd[1508]: time="2025-09-13T00:08:06.704948506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:06.705095 containerd[1508]: time="2025-09-13T00:08:06.705012885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:06.742297 systemd[1]: Started cri-containerd-beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead.scope - libcontainer container beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead.
Sep 13 00:08:06.804045 containerd[1508]: time="2025-09-13T00:08:06.803781149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784ccd46cc-hm5jp,Uid:f18899d6-04af-44d2-9702-3cd5887a569a,Namespace:calico-system,Attempt:0,} returns sandbox id \"beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead\"" Sep 13 00:08:06.809885 containerd[1508]: time="2025-09-13T00:08:06.809640370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:08:06.815961 kubelet[2564]: I0913 00:08:06.815916 2564 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f121a0f-8485-4222-991b-9a5797f41455" path="/var/lib/kubelet/pods/3f121a0f-8485-4222-991b-9a5797f41455/volumes" Sep 13 00:08:06.992206 kernel: bpftool[4059]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:08:07.220503 systemd-networkd[1400]: vxlan.calico: Link UP Sep 13 00:08:07.220514 systemd-networkd[1400]: vxlan.calico: Gained carrier Sep 13 00:08:08.244705 systemd-networkd[1400]: cali7899ace27c8: Gained IPv6LL Sep 13 00:08:08.312678 containerd[1508]: time="2025-09-13T00:08:08.312632105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:08.320317 containerd[1508]: time="2025-09-13T00:08:08.320267585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:08:08.321276 containerd[1508]: time="2025-09-13T00:08:08.321221070Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:08.325012 containerd[1508]: time="2025-09-13T00:08:08.324953891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.515278546s" Sep 13 00:08:08.325012 containerd[1508]: time="2025-09-13T00:08:08.324997633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:08:08.330907 containerd[1508]: time="2025-09-13T00:08:08.330859050Z" level=info msg="CreateContainer within sandbox \"beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:08:08.336705 containerd[1508]: time="2025-09-13T00:08:08.336475477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:08.353954 containerd[1508]: time="2025-09-13T00:08:08.353793940Z" level=info msg="CreateContainer within sandbox \"beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d16a8a4f6ca945bd5ec5360738f4917f5598242d2ebfe33d1642ae16647a97e6\"" Sep 13 00:08:08.356310 containerd[1508]: time="2025-09-13T00:08:08.356238758Z" level=info msg="StartContainer for \"d16a8a4f6ca945bd5ec5360738f4917f5598242d2ebfe33d1642ae16647a97e6\"" Sep 13 00:08:08.389288 systemd[1]: Started 
cri-containerd-d16a8a4f6ca945bd5ec5360738f4917f5598242d2ebfe33d1642ae16647a97e6.scope - libcontainer container d16a8a4f6ca945bd5ec5360738f4917f5598242d2ebfe33d1642ae16647a97e6. Sep 13 00:08:08.429524 containerd[1508]: time="2025-09-13T00:08:08.429480490Z" level=info msg="StartContainer for \"d16a8a4f6ca945bd5ec5360738f4917f5598242d2ebfe33d1642ae16647a97e6\" returns successfully" Sep 13 00:08:08.431831 containerd[1508]: time="2025-09-13T00:08:08.431796888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:08:08.756476 systemd-networkd[1400]: vxlan.calico: Gained IPv6LL Sep 13 00:08:10.406063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount884022509.mount: Deactivated successfully. Sep 13 00:08:10.431038 containerd[1508]: time="2025-09-13T00:08:10.430987103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:10.432333 containerd[1508]: time="2025-09-13T00:08:10.432126987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:08:10.433626 containerd[1508]: time="2025-09-13T00:08:10.433299303Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:10.436081 containerd[1508]: time="2025-09-13T00:08:10.436052941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:10.437195 containerd[1508]: time="2025-09-13T00:08:10.437124518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.005276073s" Sep 13 00:08:10.437264 containerd[1508]: time="2025-09-13T00:08:10.437198787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:08:10.440098 containerd[1508]: time="2025-09-13T00:08:10.440071728Z" level=info msg="CreateContainer within sandbox \"beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:08:10.454083 containerd[1508]: time="2025-09-13T00:08:10.454043456Z" level=info msg="CreateContainer within sandbox \"beda06b4904093fc1737d3a3368d9ec81c54fce43e9d83d1dce36a6b152b3ead\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7e51f4a2d490ad7307c5c3fad3ba1ed1a8f87569189484d0404c0d5db3a377bb\"" Sep 13 00:08:10.456253 containerd[1508]: time="2025-09-13T00:08:10.454868912Z" level=info msg="StartContainer for \"7e51f4a2d490ad7307c5c3fad3ba1ed1a8f87569189484d0404c0d5db3a377bb\"" Sep 13 00:08:10.488327 systemd[1]: Started cri-containerd-7e51f4a2d490ad7307c5c3fad3ba1ed1a8f87569189484d0404c0d5db3a377bb.scope - libcontainer container 7e51f4a2d490ad7307c5c3fad3ba1ed1a8f87569189484d0404c0d5db3a377bb. 
Sep 13 00:08:10.529119 containerd[1508]: time="2025-09-13T00:08:10.529015800Z" level=info msg="StartContainer for \"7e51f4a2d490ad7307c5c3fad3ba1ed1a8f87569189484d0404c0d5db3a377bb\" returns successfully" Sep 13 00:08:11.028350 systemd[1]: run-containerd-runc-k8s.io-7e51f4a2d490ad7307c5c3fad3ba1ed1a8f87569189484d0404c0d5db3a377bb-runc.9AL3A3.mount: Deactivated successfully. Sep 13 00:08:11.807501 containerd[1508]: time="2025-09-13T00:08:11.807189391Z" level=info msg="StopPodSandbox for \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\"" Sep 13 00:08:11.860609 kubelet[2564]: I0913 00:08:11.860544 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-784ccd46cc-hm5jp" podStartSLOduration=2.230650351 podStartE2EDuration="5.860523152s" podCreationTimestamp="2025-09-13 00:08:06 +0000 UTC" firstStartedPulling="2025-09-13 00:08:06.808377495 +0000 UTC m=+32.113355517" lastFinishedPulling="2025-09-13 00:08:10.438250296 +0000 UTC m=+35.743228318" observedRunningTime="2025-09-13 00:08:11.137836707 +0000 UTC m=+36.442814739" watchObservedRunningTime="2025-09-13 00:08:11.860523152 +0000 UTC m=+37.165501184" Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.858 [INFO][4256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.859 [INFO][4256] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" iface="eth0" netns="/var/run/netns/cni-f870d492-235e-5239-c58f-96df2eb6119c" Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.859 [INFO][4256] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" iface="eth0" netns="/var/run/netns/cni-f870d492-235e-5239-c58f-96df2eb6119c" Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.859 [INFO][4256] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" iface="eth0" netns="/var/run/netns/cni-f870d492-235e-5239-c58f-96df2eb6119c" Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.860 [INFO][4256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.860 [INFO][4256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.884 [INFO][4263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" HandleID="k8s-pod-network.6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.884 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.885 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.892 [WARNING][4263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" HandleID="k8s-pod-network.6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.892 [INFO][4263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" HandleID="k8s-pod-network.6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.894 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:11.899397 containerd[1508]: 2025-09-13 00:08:11.896 [INFO][4256] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:11.901889 containerd[1508]: time="2025-09-13T00:08:11.900240909Z" level=info msg="TearDown network for sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\" successfully" Sep 13 00:08:11.901889 containerd[1508]: time="2025-09-13T00:08:11.900303195Z" level=info msg="StopPodSandbox for \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\" returns successfully" Sep 13 00:08:11.902700 systemd[1]: run-netns-cni\x2df870d492\x2d235e\x2d5239\x2dc58f\x2d96df2eb6119c.mount: Deactivated successfully. Sep 13 00:08:11.905202 containerd[1508]: time="2025-09-13T00:08:11.904788808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hn74t,Uid:09981167-486a-4b04-a96e-d7e3bf65b3c6,Namespace:kube-system,Attempt:1,}" Sep 13 00:08:12.004617 systemd-networkd[1400]: cali6fe6be658ab: Link UP Sep 13 00:08:12.005639 systemd-networkd[1400]: cali6fe6be658ab: Gained carrier Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.946 [INFO][4269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0 coredns-7c65d6cfc9- kube-system 09981167-486a-4b04-a96e-d7e3bf65b3c6 899 0 2025-09-13 00:07:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-8d584fda4c coredns-7c65d6cfc9-hn74t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6fe6be658ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hn74t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.946 [INFO][4269] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hn74t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.966 [INFO][4281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" HandleID="k8s-pod-network.eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" 
Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.966 [INFO][4281] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" HandleID="k8s-pod-network.eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-8d584fda4c", "pod":"coredns-7c65d6cfc9-hn74t", "timestamp":"2025-09-13 00:08:11.966407268 +0000 UTC"}, Hostname:"ci-4081-3-5-n-8d584fda4c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.966 [INFO][4281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.966 [INFO][4281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.966 [INFO][4281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-8d584fda4c' Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.972 [INFO][4281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.977 [INFO][4281] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.983 [INFO][4281] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.985 [INFO][4281] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.987 [INFO][4281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.987 [INFO][4281] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.988 [INFO][4281] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251 Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.992 [INFO][4281] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.997 [INFO][4281] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.130/26] block=192.168.86.128/26 handle="k8s-pod-network.eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.997 [INFO][4281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.130/26] 
handle="k8s-pod-network.eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.997 [INFO][4281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:12.025964 containerd[1508]: 2025-09-13 00:08:11.997 [INFO][4281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.130/26] IPv6=[] ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" HandleID="k8s-pod-network.eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:12.027038 containerd[1508]: 2025-09-13 00:08:12.000 [INFO][4269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hn74t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"09981167-486a-4b04-a96e-d7e3bf65b3c6", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"", Pod:"coredns-7c65d6cfc9-hn74t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fe6be658ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:12.027038 containerd[1508]: 2025-09-13 00:08:12.000 [INFO][4269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.130/32] ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hn74t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:12.027038 containerd[1508]: 2025-09-13 00:08:12.000 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fe6be658ab ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hn74t"
WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:12.027038 containerd[1508]: 2025-09-13 00:08:12.005 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hn74t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:12.027038 containerd[1508]: 2025-09-13 00:08:12.006 [INFO][4269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hn74t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"09981167-486a-4b04-a96e-d7e3bf65b3c6", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251", Pod:"coredns-7c65d6cfc9-hn74t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fe6be658ab", MAC:"b2:97:d4:4e:3c:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:12.027038 containerd[1508]: 2025-09-13 00:08:12.021 [INFO][4269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hn74t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:12.042449 containerd[1508]: time="2025-09-13T00:08:12.042355835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:12.042449 containerd[1508]: time="2025-09-13T00:08:12.042412922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:12.042637 containerd[1508]: time="2025-09-13T00:08:12.042426147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:12.042637 containerd[1508]: time="2025-09-13T00:08:12.042484747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:12.066257 systemd[1]: Started cri-containerd-eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251.scope - libcontainer container eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251. Sep 13 00:08:12.105204 containerd[1508]: time="2025-09-13T00:08:12.105004232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hn74t,Uid:09981167-486a-4b04-a96e-d7e3bf65b3c6,Namespace:kube-system,Attempt:1,} returns sandbox id \"eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251\"" Sep 13 00:08:12.110260 containerd[1508]: time="2025-09-13T00:08:12.110122962Z" level=info msg="CreateContainer within sandbox \"eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:08:12.129127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206469039.mount: Deactivated successfully. Sep 13 00:08:12.131279 containerd[1508]: time="2025-09-13T00:08:12.130347306Z" level=info msg="CreateContainer within sandbox \"eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b1a003358406c7f09925aa6c139c1b9b93c145a6ea639fda048bbd89bce0c3a\"" Sep 13 00:08:12.133462 containerd[1508]: time="2025-09-13T00:08:12.133008672Z" level=info msg="StartContainer for \"9b1a003358406c7f09925aa6c139c1b9b93c145a6ea639fda048bbd89bce0c3a\"" Sep 13 00:08:12.161313 systemd[1]: Started cri-containerd-9b1a003358406c7f09925aa6c139c1b9b93c145a6ea639fda048bbd89bce0c3a.scope - libcontainer container 9b1a003358406c7f09925aa6c139c1b9b93c145a6ea639fda048bbd89bce0c3a. 
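The WorkloadEndpoint dumps above print port numbers in hexadecimal (Port:0x35, Port:0x23c1). A throwaway Go snippet to decode them back to decimal; the name-to-port mapping is copied straight from the dump:

```go
package main

import "fmt"

func main() {
	// Port values as they appear in the WorkloadEndpoint struct dumps above,
	// which render the uint16 fields in hex.
	ports := map[string]uint16{
		"dns":     0x35,   // 53/UDP
		"dns-tcp": 0x35,   // 53/TCP
		"metrics": 0x23c1, // 9153/TCP, the conventional CoreDNS Prometheus port
	}
	for name, p := range ports {
		fmt.Printf("%-8s -> %d\n", name, p)
	}
}
```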
Sep 13 00:08:12.191083 containerd[1508]: time="2025-09-13T00:08:12.191013040Z" level=info msg="StartContainer for \"9b1a003358406c7f09925aa6c139c1b9b93c145a6ea639fda048bbd89bce0c3a\" returns successfully" Sep 13 00:08:13.152375 kubelet[2564]: I0913 00:08:13.152077 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hn74t" podStartSLOduration=33.152030677 podStartE2EDuration="33.152030677s" podCreationTimestamp="2025-09-13 00:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:13.13590072 +0000 UTC m=+38.440878752" watchObservedRunningTime="2025-09-13 00:08:13.152030677 +0000 UTC m=+38.457008709" Sep 13 00:08:13.808500 containerd[1508]: time="2025-09-13T00:08:13.807893889Z" level=info msg="StopPodSandbox for \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\"" Sep 13 00:08:13.809683 containerd[1508]: time="2025-09-13T00:08:13.809229211Z" level=info msg="StopPodSandbox for \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\"" Sep 13 00:08:13.810679 containerd[1508]: time="2025-09-13T00:08:13.810420621Z" level=info msg="StopPodSandbox for \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\"" Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.882 [INFO][4413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.884 [INFO][4413] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" iface="eth0" netns="/var/run/netns/cni-5ec3b3a2-30da-a41a-1a6b-573928f6767d" Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.884 [INFO][4413] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" iface="eth0" netns="/var/run/netns/cni-5ec3b3a2-30da-a41a-1a6b-573928f6767d" Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.885 [INFO][4413] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" iface="eth0" netns="/var/run/netns/cni-5ec3b3a2-30da-a41a-1a6b-573928f6767d" Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.885 [INFO][4413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.885 [INFO][4413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.938 [INFO][4434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" HandleID="k8s-pod-network.45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.938 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.938 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
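The kubelet record above reports podStartSLOduration=33.152s for coredns-7c65d6cfc9-hn74t. As a sanity check, the gap between the pod's creation timestamp and its observed running time can be recomputed from the two timestamps in that record. A minimal sketch; the small difference from the reported figure is expected, since kubelet measures against its own watch-observation time rather than this exact subtraction:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet pod_startup_latency_tracker record above.
	// time.Parse accepts a fractional second after the seconds field even when
	// the layout omits it.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2025-09-13 00:07:40 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-09-13 00:08:13.13590072 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(running.Sub(created)) // 33.13590072s, close to the reported ~33.152s
}
```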
Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.949 [WARNING][4434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" HandleID="k8s-pod-network.45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.949 [INFO][4434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" HandleID="k8s-pod-network.45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.951 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.958685 containerd[1508]: 2025-09-13 00:08:13.954 [INFO][4413] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:13.962678 containerd[1508]: time="2025-09-13T00:08:13.959482852Z" level=info msg="TearDown network for sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\" successfully" Sep 13 00:08:13.962678 containerd[1508]: time="2025-09-13T00:08:13.959522056Z" level=info msg="StopPodSandbox for \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\" returns successfully" Sep 13 00:08:13.962507 systemd[1]: run-netns-cni\x2d5ec3b3a2\x2d30da\x2da41a\x2d1a6b\x2d573928f6767d.mount: Deactivated successfully. Sep 13 00:08:13.964684 containerd[1508]: time="2025-09-13T00:08:13.963358844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-5x8gx,Uid:df321835-3377-45e7-8908-644fb093834e,Namespace:calico-system,Attempt:1,}" Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.909 [INFO][4418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.909 [INFO][4418] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" iface="eth0" netns="/var/run/netns/cni-2d64ae3a-52d7-b170-20cb-a52f810caa4f" Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.910 [INFO][4418] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" iface="eth0" netns="/var/run/netns/cni-2d64ae3a-52d7-b170-20cb-a52f810caa4f" Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.911 [INFO][4418] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" iface="eth0" netns="/var/run/netns/cni-2d64ae3a-52d7-b170-20cb-a52f810caa4f" Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.911 [INFO][4418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.911 [INFO][4418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.947 [INFO][4442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" HandleID="k8s-pod-network.757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.948 [INFO][4442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.953 [INFO][4442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.958 [WARNING][4442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" HandleID="k8s-pod-network.757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.958 [INFO][4442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" HandleID="k8s-pod-network.757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.964 [INFO][4442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.969588 containerd[1508]: 2025-09-13 00:08:13.968 [INFO][4418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:13.990881 systemd[1]: run-netns-cni\x2d2d64ae3a\x2d52d7\x2db170\x2d20cb\x2da52f810caa4f.mount: Deactivated successfully. Sep 13 00:08:13.991980 containerd[1508]: time="2025-09-13T00:08:13.991946138Z" level=info msg="TearDown network for sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\" successfully" Sep 13 00:08:13.992057 containerd[1508]: time="2025-09-13T00:08:13.991979711Z" level=info msg="StopPodSandbox for \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\" returns successfully" Sep 13 00:08:13.993208 containerd[1508]: time="2025-09-13T00:08:13.993182795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cs8pq,Uid:32aa0455-0ec7-4086-b5ba-65161853a1e0,Namespace:calico-system,Attempt:1,}" Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.901 [INFO][4414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.901 [INFO][4414] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" iface="eth0" netns="/var/run/netns/cni-1978ff9c-45a6-d10f-3558-f27c6ca84fc9" Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.902 [INFO][4414] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" iface="eth0" netns="/var/run/netns/cni-1978ff9c-45a6-d10f-3558-f27c6ca84fc9" Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.902 [INFO][4414] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" iface="eth0" netns="/var/run/netns/cni-1978ff9c-45a6-d10f-3558-f27c6ca84fc9" Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.902 [INFO][4414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.902 [INFO][4414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.965 [INFO][4440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" HandleID="k8s-pod-network.345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.965 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.966 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.979 [WARNING][4440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" HandleID="k8s-pod-network.345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.979 [INFO][4440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" HandleID="k8s-pod-network.345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.986 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.999748 containerd[1508]: 2025-09-13 00:08:13.995 [INFO][4414] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:14.001628 containerd[1508]: time="2025-09-13T00:08:13.999895791Z" level=info msg="TearDown network for sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\" successfully" Sep 13 00:08:14.001628 containerd[1508]: time="2025-09-13T00:08:13.999917723Z" level=info msg="StopPodSandbox for \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\" returns successfully" Sep 13 00:08:14.002219 containerd[1508]: time="2025-09-13T00:08:14.002199798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vl4w6,Uid:067bc44d-3d11-4ed9-9172-bf4918269f30,Namespace:kube-system,Attempt:1,}" Sep 13 00:08:14.048545 systemd[1]: run-netns-cni\x2d1978ff9c\x2d45a6\x2dd10f\x2d3558\x2df27c6ca84fc9.mount: Deactivated successfully. Sep 13 00:08:14.068597 systemd-networkd[1400]: cali6fe6be658ab: Gained IPv6LL Sep 13 00:08:14.140205 systemd-networkd[1400]: cali9a2f0dfe017: Link UP Sep 13 00:08:14.140358 systemd-networkd[1400]: cali9a2f0dfe017: Gained carrier Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.043 [INFO][4456] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0 goldmane-7988f88666- calico-system df321835-3377-45e7-8908-644fb093834e 922 0 2025-09-13 00:07:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-n-8d584fda4c goldmane-7988f88666-5x8gx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9a2f0dfe017 [] [] }} ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" Namespace="calico-system" Pod="goldmane-7988f88666-5x8gx" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.044 [INFO][4456] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" Namespace="calico-system" Pod="goldmane-7988f88666-5x8gx" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.094 [INFO][4491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" HandleID="k8s-pod-network.b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.094 [INFO][4491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" HandleID="k8s-pod-network.b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-8d584fda4c", "pod":"goldmane-7988f88666-5x8gx", "timestamp":"2025-09-13 00:08:14.094251816 +0000 UTC"}, Hostname:"ci-4081-3-5-n-8d584fda4c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.094 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.094 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.094 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-8d584fda4c' Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.103 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.109 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.115 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.116 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.118 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.118 [INFO][4491] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.120 [INFO][4491] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.125 [INFO][4491] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.131 [INFO][4491] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.131/26] block=192.168.86.128/26 handle="k8s-pod-network.b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.131 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.131/26] handle="k8s-pod-network.b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.131 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
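The run-netns-cni\x2d... mount units in the teardown records above are systemd's escaped form of the /run/netns/cni-... paths that the CNI plugin reports. The following is a deliberately minimal, hypothetical stand-in for `systemd-escape --path`, handling only the two substitutions visible in this capture; the real escaper also hex-escapes other non-alphanumeric bytes:

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath mimics the two rules visible in the mount-unit names above:
// '/' becomes '-', and a literal '-' becomes `\x2d`. Not a full
// reimplementation of systemd's escaping.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for _, c := range p {
		switch c {
		case '/':
			b.WriteByte('-')
		case '-':
			b.WriteString(`\x2d`)
		default:
			b.WriteRune(c)
		}
	}
	return b.String()
}

func main() {
	// Netns path taken from the 45209ed7... teardown records above
	// (/var/run is a symlink to /run, hence the run- prefix in the unit name).
	fmt.Println(escapePath("/run/netns/cni-5ec3b3a2-30da-a41a-1a6b-573928f6767d") + ".mount")
	// -> run-netns-cni\x2d5ec3b3a2\x2d30da\x2da41a\x2d1a6b\x2d573928f6767d.mount
}
```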
Sep 13 00:08:14.158190 containerd[1508]: 2025-09-13 00:08:14.131 [INFO][4491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.131/26] IPv6=[] ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" HandleID="k8s-pod-network.b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:14.160824 containerd[1508]: 2025-09-13 00:08:14.136 [INFO][4456] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" Namespace="calico-system" Pod="goldmane-7988f88666-5x8gx" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"df321835-3377-45e7-8908-644fb093834e", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"", Pod:"goldmane-7988f88666-5x8gx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9a2f0dfe017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:14.160824 containerd[1508]: 2025-09-13 00:08:14.136 [INFO][4456] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.131/32] ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" Namespace="calico-system" Pod="goldmane-7988f88666-5x8gx" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:14.160824 containerd[1508]: 2025-09-13 00:08:14.136 [INFO][4456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a2f0dfe017 ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" Namespace="calico-system" Pod="goldmane-7988f88666-5x8gx" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:14.160824 containerd[1508]: 2025-09-13 00:08:14.138 [INFO][4456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" Namespace="calico-system" Pod="goldmane-7988f88666-5x8gx" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:14.160824 containerd[1508]: 2025-09-13 00:08:14.138 [INFO][4456] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" 
Namespace="calico-system" Pod="goldmane-7988f88666-5x8gx" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"df321835-3377-45e7-8908-644fb093834e", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded", Pod:"goldmane-7988f88666-5x8gx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9a2f0dfe017", MAC:"62:61:7a:a6:85:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:14.160824 containerd[1508]: 2025-09-13 00:08:14.152 [INFO][4456] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded" Namespace="calico-system" Pod="goldmane-7988f88666-5x8gx" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:14.176226 containerd[1508]: time="2025-09-13T00:08:14.176018264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:14.176226 containerd[1508]: time="2025-09-13T00:08:14.176066745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:14.176226 containerd[1508]: time="2025-09-13T00:08:14.176080620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:14.177701 containerd[1508]: time="2025-09-13T00:08:14.176525384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:14.197268 systemd[1]: Started cri-containerd-b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded.scope - libcontainer container b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded. 
Sep 13 00:08:14.236241 systemd-networkd[1400]: cali97e7e4477d7: Link UP Sep 13 00:08:14.237566 systemd-networkd[1400]: cali97e7e4477d7: Gained carrier Sep 13 00:08:14.255593 containerd[1508]: time="2025-09-13T00:08:14.255511403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-5x8gx,Uid:df321835-3377-45e7-8908-644fb093834e,Namespace:calico-system,Attempt:1,} returns sandbox id \"b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded\"" Sep 13 00:08:14.260254 containerd[1508]: time="2025-09-13T00:08:14.260220446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.072 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0 coredns-7c65d6cfc9- kube-system 067bc44d-3d11-4ed9-9172-bf4918269f30 923 0 2025-09-13 00:07:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-8d584fda4c coredns-7c65d6cfc9-vl4w6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali97e7e4477d7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vl4w6" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.072 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vl4w6" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.108 [INFO][4497] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" HandleID="k8s-pod-network.77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.109 [INFO][4497] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" HandleID="k8s-pod-network.77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-8d584fda4c", "pod":"coredns-7c65d6cfc9-vl4w6", "timestamp":"2025-09-13 00:08:14.108836329 +0000 UTC"}, Hostname:"ci-4081-3-5-n-8d584fda4c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.110 [INFO][4497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.132 [INFO][4497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.132 [INFO][4497] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-8d584fda4c' Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.204 [INFO][4497] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.211 [INFO][4497] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.215 [INFO][4497] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.217 [INFO][4497] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.218 [INFO][4497] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.218 [INFO][4497] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.220 [INFO][4497] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624 Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.223 [INFO][4497] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.229 [INFO][4497] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.132/26] block=192.168.86.128/26 handle="k8s-pod-network.77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.229 [INFO][4497] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.132/26] handle="k8s-pod-network.77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.229 [INFO][4497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:08:14.260698 containerd[1508]: 2025-09-13 00:08:14.229 [INFO][4497] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.132/26] IPv6=[] ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" HandleID="k8s-pod-network.77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:14.261115 containerd[1508]: 2025-09-13 00:08:14.233 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vl4w6" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"067bc44d-3d11-4ed9-9172-bf4918269f30", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"", Pod:"coredns-7c65d6cfc9-vl4w6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97e7e4477d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:14.261115 containerd[1508]: 2025-09-13 00:08:14.233 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.132/32] ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vl4w6" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:14.261115 containerd[1508]: 2025-09-13 00:08:14.233 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97e7e4477d7 ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vl4w6" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:14.261115 containerd[1508]: 2025-09-13 00:08:14.235 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-vl4w6" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:14.261115 containerd[1508]: 2025-09-13 00:08:14.236 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vl4w6" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"067bc44d-3d11-4ed9-9172-bf4918269f30", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624", Pod:"coredns-7c65d6cfc9-vl4w6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97e7e4477d7", MAC:"c2:dd:d6:46:a1:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:14.261115 containerd[1508]: 2025-09-13 00:08:14.256 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vl4w6" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:14.278942 containerd[1508]: time="2025-09-13T00:08:14.278412599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:14.278942 containerd[1508]: time="2025-09-13T00:08:14.278795135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:14.278942 containerd[1508]: time="2025-09-13T00:08:14.278806156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:14.278942 containerd[1508]: time="2025-09-13T00:08:14.278865497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:14.301270 systemd[1]: Started cri-containerd-77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624.scope - libcontainer container 77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624. Sep 13 00:08:14.347974 systemd-networkd[1400]: calic9f1e67a896: Link UP Sep 13 00:08:14.348999 systemd-networkd[1400]: calic9f1e67a896: Gained carrier Sep 13 00:08:14.352908 containerd[1508]: time="2025-09-13T00:08:14.352759858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vl4w6,Uid:067bc44d-3d11-4ed9-9172-bf4918269f30,Namespace:kube-system,Attempt:1,} returns sandbox id \"77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624\"" Sep 13 00:08:14.356322 containerd[1508]: time="2025-09-13T00:08:14.356264192Z" level=info msg="CreateContainer within sandbox \"77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.081 [INFO][4465] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0 csi-node-driver- calico-system 32aa0455-0ec7-4086-b5ba-65161853a1e0 924 0 2025-09-13 00:07:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-n-8d584fda4c csi-node-driver-cs8pq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic9f1e67a896 [] [] }} ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Namespace="calico-system" Pod="csi-node-driver-cs8pq" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.081 [INFO][4465] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Namespace="calico-system" Pod="csi-node-driver-cs8pq" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.117 [INFO][4502] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" HandleID="k8s-pod-network.b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.117 [INFO][4502] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" HandleID="k8s-pod-network.b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-8d584fda4c", "pod":"csi-node-driver-cs8pq", "timestamp":"2025-09-13 00:08:14.117223002 +0000 UTC"}, Hostname:"ci-4081-3-5-n-8d584fda4c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.117 [INFO][4502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.229 [INFO][4502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.229 [INFO][4502] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-8d584fda4c' Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.304 [INFO][4502] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.312 [INFO][4502] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.317 [INFO][4502] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.319 [INFO][4502] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.321 [INFO][4502] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.321 [INFO][4502] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.323 [INFO][4502] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.328 [INFO][4502] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.338 [INFO][4502] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.133/26] block=192.168.86.128/26 handle="k8s-pod-network.b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.338 [INFO][4502] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.133/26] handle="k8s-pod-network.b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.338 [INFO][4502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
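All four IPAM claims in this capture come out of the same block affinity, 192.168.86.128/26. A quick standard-library check that the claimed addresses really fall inside that block; the pod-to-address mapping is read off the records above:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block CIDR and claimed addresses as reported by the ipam/ipam.go records.
	block := netip.MustParsePrefix("192.168.86.128/26")
	claimed := map[string]string{
		"coredns-7c65d6cfc9-hn74t":  "192.168.86.130",
		"goldmane-7988f88666-5x8gx": "192.168.86.131",
		"coredns-7c65d6cfc9-vl4w6":  "192.168.86.132",
		"csi-node-driver-cs8pq":     "192.168.86.133",
	}
	for pod, ip := range claimed {
		addr := netip.MustParseAddr(ip)
		// Contains reports whether the address lies inside the /26 block
		// (.128 through .191).
		fmt.Printf("%-26s %s in %s: %v\n", pod, addr, block, block.Contains(addr))
	}
}
```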
Sep 13 00:08:14.366612 containerd[1508]: 2025-09-13 00:08:14.338 [INFO][4502] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.133/26] IPv6=[] ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" HandleID="k8s-pod-network.b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:14.368236 containerd[1508]: 2025-09-13 00:08:14.341 [INFO][4465] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Namespace="calico-system" Pod="csi-node-driver-cs8pq" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32aa0455-0ec7-4086-b5ba-65161853a1e0", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"", Pod:"csi-node-driver-cs8pq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9f1e67a896", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:14.368236 containerd[1508]: 2025-09-13 00:08:14.341 [INFO][4465] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.133/32] ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Namespace="calico-system" Pod="csi-node-driver-cs8pq" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:14.368236 containerd[1508]: 2025-09-13 00:08:14.341 [INFO][4465] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9f1e67a896 ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Namespace="calico-system" Pod="csi-node-driver-cs8pq" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:14.368236 containerd[1508]: 2025-09-13 00:08:14.352 [INFO][4465] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Namespace="calico-system" Pod="csi-node-driver-cs8pq" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:14.368236 containerd[1508]: 2025-09-13 00:08:14.353 [INFO][4465] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Namespace="calico-system" Pod="csi-node-driver-cs8pq" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32aa0455-0ec7-4086-b5ba-65161853a1e0", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da", Pod:"csi-node-driver-cs8pq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9f1e67a896", MAC:"82:8c:70:ce:9c:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:14.368236 containerd[1508]: 2025-09-13 00:08:14.363 [INFO][4465] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da" Namespace="calico-system" Pod="csi-node-driver-cs8pq" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:14.370673 containerd[1508]: time="2025-09-13T00:08:14.370639034Z" level=info msg="CreateContainer within sandbox \"77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa570d05e9fde8155a78743bf158aff1d009a6ea818d1989135d766f5c7a2a25\"" Sep 13 00:08:14.371682 containerd[1508]: time="2025-09-13T00:08:14.371655117Z" level=info msg="StartContainer for \"aa570d05e9fde8155a78743bf158aff1d009a6ea818d1989135d766f5c7a2a25\"" Sep 13 00:08:14.392062 containerd[1508]: time="2025-09-13T00:08:14.390768626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:14.392062 containerd[1508]: time="2025-09-13T00:08:14.390878322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:14.392062 containerd[1508]: time="2025-09-13T00:08:14.390912897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:14.392062 containerd[1508]: time="2025-09-13T00:08:14.391013345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:14.407283 systemd[1]: Started cri-containerd-b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da.scope - libcontainer container b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da. Sep 13 00:08:14.410026 systemd[1]: Started cri-containerd-aa570d05e9fde8155a78743bf158aff1d009a6ea818d1989135d766f5c7a2a25.scope - libcontainer container aa570d05e9fde8155a78743bf158aff1d009a6ea818d1989135d766f5c7a2a25. Sep 13 00:08:14.435953 containerd[1508]: time="2025-09-13T00:08:14.435924725Z" level=info msg="StartContainer for \"aa570d05e9fde8155a78743bf158aff1d009a6ea818d1989135d766f5c7a2a25\" returns successfully" Sep 13 00:08:14.436399 containerd[1508]: time="2025-09-13T00:08:14.436343039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cs8pq,Uid:32aa0455-0ec7-4086-b5ba-65161853a1e0,Namespace:calico-system,Attempt:1,} returns sandbox id \"b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da\"" Sep 13 00:08:14.827834 containerd[1508]: time="2025-09-13T00:08:14.827431158Z" level=info msg="StopPodSandbox for \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\"" Sep 13 00:08:14.827834 containerd[1508]: time="2025-09-13T00:08:14.827629539Z" level=info msg="StopPodSandbox for \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\"" Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.911 [INFO][4714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.911 [INFO][4714] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" iface="eth0" netns="/var/run/netns/cni-cfaeb5b4-3f18-317b-f2f7-64efb0cb36a2" Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.911 [INFO][4714] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" iface="eth0" netns="/var/run/netns/cni-cfaeb5b4-3f18-317b-f2f7-64efb0cb36a2" Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.912 [INFO][4714] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" iface="eth0" netns="/var/run/netns/cni-cfaeb5b4-3f18-317b-f2f7-64efb0cb36a2" Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.912 [INFO][4714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.912 [INFO][4714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.931 [INFO][4733] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" HandleID="k8s-pod-network.2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.933 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.933 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.944 [WARNING][4733] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" HandleID="k8s-pod-network.2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.944 [INFO][4733] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" HandleID="k8s-pod-network.2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.946 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:14.953606 containerd[1508]: 2025-09-13 00:08:14.952 [INFO][4714] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:14.954686 containerd[1508]: time="2025-09-13T00:08:14.954457269Z" level=info msg="TearDown network for sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\" successfully" Sep 13 00:08:14.954686 containerd[1508]: time="2025-09-13T00:08:14.954504608Z" level=info msg="StopPodSandbox for \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\" returns successfully" Sep 13 00:08:14.955524 containerd[1508]: time="2025-09-13T00:08:14.955503620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6ffd985b-5hsfg,Uid:9b37eaa3-aff0-44f7-9179-93bbfd3008e3,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.908 [INFO][4717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.909 [INFO][4717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" iface="eth0" netns="/var/run/netns/cni-ff3e1611-6fe5-06a9-c117-0d3a5f727f70" Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.910 [INFO][4717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" iface="eth0" netns="/var/run/netns/cni-ff3e1611-6fe5-06a9-c117-0d3a5f727f70" Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.910 [INFO][4717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" iface="eth0" netns="/var/run/netns/cni-ff3e1611-6fe5-06a9-c117-0d3a5f727f70" Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.910 [INFO][4717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.910 [INFO][4717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.943 [INFO][4731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" HandleID="k8s-pod-network.dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.944 [INFO][4731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.946 [INFO][4731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.953 [WARNING][4731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" HandleID="k8s-pod-network.dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.953 [INFO][4731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" HandleID="k8s-pod-network.dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.955 [INFO][4731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:14.959295 containerd[1508]: 2025-09-13 00:08:14.957 [INFO][4717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:14.960491 containerd[1508]: time="2025-09-13T00:08:14.959421790Z" level=info msg="TearDown network for sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\" successfully" Sep 13 00:08:14.960491 containerd[1508]: time="2025-09-13T00:08:14.959437600Z" level=info msg="StopPodSandbox for \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\" returns successfully" Sep 13 00:08:14.960491 containerd[1508]: time="2025-09-13T00:08:14.960000414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-88549f55-jdldt,Uid:8d773f84-ba61-4193-b743-eaaa25a9f55a,Namespace:calico-system,Attempt:1,}" Sep 13 00:08:15.053215 systemd[1]: run-netns-cni\x2dcfaeb5b4\x2d3f18\x2d317b\x2df2f7\x2d64efb0cb36a2.mount: Deactivated successfully. Sep 13 00:08:15.053292 systemd[1]: run-netns-cni\x2dff3e1611\x2d6fe5\x2d06a9\x2dc117\x2d0d3a5f727f70.mount: Deactivated successfully. 
Sep 13 00:08:15.107943 systemd-networkd[1400]: cali63d860469ec: Link UP Sep 13 00:08:15.109596 systemd-networkd[1400]: cali63d860469ec: Gained carrier Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.019 [INFO][4758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0 calico-kube-controllers-88549f55- calico-system 8d773f84-ba61-4193-b743-eaaa25a9f55a 942 0 2025-09-13 00:07:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:88549f55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-n-8d584fda4c calico-kube-controllers-88549f55-jdldt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali63d860469ec [] [] }} ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Namespace="calico-system" Pod="calico-kube-controllers-88549f55-jdldt" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.020 [INFO][4758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Namespace="calico-system" Pod="calico-kube-controllers-88549f55-jdldt" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.061 [INFO][4770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" HandleID="k8s-pod-network.777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.061 [INFO][4770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" HandleID="k8s-pod-network.777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-8d584fda4c", "pod":"calico-kube-controllers-88549f55-jdldt", "timestamp":"2025-09-13 00:08:15.061332685 +0000 UTC"}, Hostname:"ci-4081-3-5-n-8d584fda4c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.061 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.061 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.061 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-8d584fda4c' Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.069 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.075 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.081 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.082 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.085 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.085 [INFO][4770] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.086 [INFO][4770] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.091 [INFO][4770] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.099 [INFO][4770] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.134/26] block=192.168.86.128/26 handle="k8s-pod-network.777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.099 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.134/26] handle="k8s-pod-network.777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.099 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
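
The allocation walkthrough above is Calico's block-affinity IPAM in action: the node ci-4081-3-5-n-8d584fda4c holds an affinity for the /26 block 192.168.86.128/26, so the handler confirms the affinity, loads the block, claims the next free address (192.168.86.134), and commits the claim by writing the block back to the datastore. Every pod IP in this section (.133 through .136) is carved from that single 64-address block. Checking a logged address against the node's affine block needs only the standard library; a quick sketch:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block the node holds an affinity for, per the log above.
	block := netip.MustParsePrefix("192.168.86.128/26")

	// Addresses the IPAM plugin assigned in this section.
	for _, s := range []string{"192.168.86.133", "192.168.86.134", "192.168.86.135", "192.168.86.136"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
	// A /26 spans 64 addresses, so this block covers .128 through .191.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}
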
Sep 13 00:08:15.125253 containerd[1508]: 2025-09-13 00:08:15.099 [INFO][4770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.134/26] IPv6=[] ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" HandleID="k8s-pod-network.777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:15.126067 containerd[1508]: 2025-09-13 00:08:15.103 [INFO][4758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Namespace="calico-system" Pod="calico-kube-controllers-88549f55-jdldt" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0", GenerateName:"calico-kube-controllers-88549f55-", Namespace:"calico-system", SelfLink:"", UID:"8d773f84-ba61-4193-b743-eaaa25a9f55a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"88549f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"", Pod:"calico-kube-controllers-88549f55-jdldt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali63d860469ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:15.126067 containerd[1508]: 2025-09-13 00:08:15.104 [INFO][4758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.134/32] ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Namespace="calico-system" Pod="calico-kube-controllers-88549f55-jdldt" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:15.126067 containerd[1508]: 2025-09-13 00:08:15.104 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63d860469ec ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Namespace="calico-system" Pod="calico-kube-controllers-88549f55-jdldt" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:15.126067 containerd[1508]: 2025-09-13 00:08:15.108 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Namespace="calico-system" Pod="calico-kube-controllers-88549f55-jdldt" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 
13 00:08:15.126067 containerd[1508]: 2025-09-13 00:08:15.109 [INFO][4758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Namespace="calico-system" Pod="calico-kube-controllers-88549f55-jdldt" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0", GenerateName:"calico-kube-controllers-88549f55-", Namespace:"calico-system", SelfLink:"", UID:"8d773f84-ba61-4193-b743-eaaa25a9f55a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"88549f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c", Pod:"calico-kube-controllers-88549f55-jdldt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali63d860469ec", MAC:"0a:79:4d:3c:51:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:15.126067 containerd[1508]: 2025-09-13 00:08:15.122 [INFO][4758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c" Namespace="calico-system" Pod="calico-kube-controllers-88549f55-jdldt" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:15.148949 containerd[1508]: time="2025-09-13T00:08:15.148785697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:15.148949 containerd[1508]: time="2025-09-13T00:08:15.148856670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:15.148949 containerd[1508]: time="2025-09-13T00:08:15.148932992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:15.149096 containerd[1508]: time="2025-09-13T00:08:15.149030535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:15.150855 kubelet[2564]: I0913 00:08:15.150592 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vl4w6" podStartSLOduration=35.150574427 podStartE2EDuration="35.150574427s" podCreationTimestamp="2025-09-13 00:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:15.149977039 +0000 UTC m=+40.454955060" watchObservedRunningTime="2025-09-13 00:08:15.150574427 +0000 UTC m=+40.455552449" Sep 13 00:08:15.185522 systemd[1]: Started cri-containerd-777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c.scope - libcontainer container 777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c. Sep 13 00:08:15.243803 containerd[1508]: time="2025-09-13T00:08:15.243757134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-88549f55-jdldt,Uid:8d773f84-ba61-4193-b743-eaaa25a9f55a,Namespace:calico-system,Attempt:1,} returns sandbox id \"777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c\"" Sep 13 00:08:15.247889 systemd-networkd[1400]: caliae665c68b9b: Link UP Sep 13 00:08:15.249887 systemd-networkd[1400]: caliae665c68b9b: Gained carrier Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.018 [INFO][4749] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0 calico-apiserver-7b6ffd985b- calico-apiserver 9b37eaa3-aff0-44f7-9179-93bbfd3008e3 943 0 2025-09-13 00:07:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b6ffd985b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-8d584fda4c calico-apiserver-7b6ffd985b-5hsfg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae665c68b9b [] [] }} ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-5hsfg" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.019 [INFO][4749] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-5hsfg" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.062 [INFO][4768] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" HandleID="k8s-pod-network.9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.062 [INFO][4768] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" HandleID="k8s-pod-network.9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-8d584fda4c", "pod":"calico-apiserver-7b6ffd985b-5hsfg", "timestamp":"2025-09-13 00:08:15.062083492 +0000 UTC"}, Hostname:"ci-4081-3-5-n-8d584fda4c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.062 [INFO][4768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.099 [INFO][4768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.099 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-8d584fda4c' Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.171 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.196 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.206 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.211 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.215 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.215 [INFO][4768] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.218 [INFO][4768] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1 Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.226 [INFO][4768] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.242 [INFO][4768] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.135/26] block=192.168.86.128/26 handle="k8s-pod-network.9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.242 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.135/26] handle="k8s-pod-network.9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.242 [INFO][4768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
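
Worth noting in the interleaving above: handler [4768] logged "About to acquire host-wide IPAM lock" at 15.062 but only acquired it at 15.099, the moment handler [4770] released it after claiming .134; [4768] then claimed the next free address, .135. The host-wide lock serializes concurrent CNI ADDs so pods scheduled in the same instant cannot claim the same IP. Compressed to a sketch (the type and field names are assumptions, not Calico internals):

package main

import (
	"fmt"
	"sync"
)

// allocator hands out the lowest free offset in a 64-address
// block; a single host-wide mutex serializes claimants, as the
// interleaved [4770]/[4768] lock messages above show.
type allocator struct {
	hostWide sync.Mutex
	used     [64]bool
}

func (a *allocator) assign() int {
	a.hostWide.Lock() // "Acquired host-wide IPAM lock."
	defer a.hostWide.Unlock()
	for off, inUse := range a.used {
		if !inUse {
			a.used[off] = true
			return off // block base is .128, so offset 6 -> 192.168.86.134
		}
	}
	return -1 // block exhausted
}

func main() {
	a := &allocator{}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // two pods racing, like handlers 4770 and 4768
		wg.Add(1)
		go func() { defer wg.Done(); fmt.Println("offset:", a.assign()) }()
	}
	wg.Wait()
}
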
Sep 13 00:08:15.270484 containerd[1508]: 2025-09-13 00:08:15.242 [INFO][4768] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.135/26] IPv6=[] ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" HandleID="k8s-pod-network.9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:15.271832 containerd[1508]: 2025-09-13 00:08:15.245 [INFO][4749] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-5hsfg" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0", GenerateName:"calico-apiserver-7b6ffd985b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b37eaa3-aff0-44f7-9179-93bbfd3008e3", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6ffd985b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"", Pod:"calico-apiserver-7b6ffd985b-5hsfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae665c68b9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:15.271832 containerd[1508]: 2025-09-13 00:08:15.245 [INFO][4749] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.135/32] ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-5hsfg" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:15.271832 containerd[1508]: 2025-09-13 00:08:15.245 [INFO][4749] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae665c68b9b ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-5hsfg" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:15.271832 containerd[1508]: 2025-09-13 00:08:15.248 [INFO][4749] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-5hsfg" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:15.271832 containerd[1508]: 2025-09-13 
00:08:15.250 [INFO][4749] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-5hsfg" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0", GenerateName:"calico-apiserver-7b6ffd985b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b37eaa3-aff0-44f7-9179-93bbfd3008e3", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6ffd985b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1", Pod:"calico-apiserver-7b6ffd985b-5hsfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae665c68b9b", MAC:"4a:9b:d7:0e:f2:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:15.271832 containerd[1508]: 2025-09-13 00:08:15.265 [INFO][4749] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-5hsfg" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:15.308764 containerd[1508]: time="2025-09-13T00:08:15.308476321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:15.308764 containerd[1508]: time="2025-09-13T00:08:15.308564306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:15.308764 containerd[1508]: time="2025-09-13T00:08:15.308586868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:15.309723 containerd[1508]: time="2025-09-13T00:08:15.309650140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:15.332237 systemd[1]: Started cri-containerd-9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1.scope - libcontainer container 9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1. 
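
Three log framings are interleaved throughout this journal: the outer systemd/journald prefix (timestamp plus unit, e.g. containerd[1508]), containerd's own logrus key=value lines (time=..., level=info, msg=...), and the Calico plugin's bracketed format (timestamp [LEVEL][handler] file.go line: message). A tolerant regexp for the Calico framing, as a sketch:

package main

import (
	"fmt"
	"regexp"
)

// calicoLine matches the CNI plugin's framing seen above, e.g.
// "2025-09-13 00:08:15.061 [INFO][4770] ipam/ipam_plugin.go 353: msg"
var calicoLine = regexp.MustCompile(
	`^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.+)$`)

func main() {
	line := "2025-09-13 00:08:15.061 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock."
	m := calicoLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("ts=%s level=%s handler=%s file=%s line=%s\n", m[1], m[2], m[3], m[4], m[5])
	fmt.Printf("msg=%s\n", m[6])
}
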
Sep 13 00:08:15.348257 systemd-networkd[1400]: cali97e7e4477d7: Gained IPv6LL Sep 13 00:08:15.365603 containerd[1508]: time="2025-09-13T00:08:15.365400995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6ffd985b-5hsfg,Uid:9b37eaa3-aff0-44f7-9179-93bbfd3008e3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1\"" Sep 13 00:08:16.050240 systemd[1]: run-containerd-runc-k8s.io-9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1-runc.kGbYnU.mount: Deactivated successfully. Sep 13 00:08:16.052333 systemd-networkd[1400]: calic9f1e67a896: Gained IPv6LL Sep 13 00:08:16.116686 systemd-networkd[1400]: cali9a2f0dfe017: Gained IPv6LL Sep 13 00:08:16.180403 systemd-networkd[1400]: cali63d860469ec: Gained IPv6LL Sep 13 00:08:16.348055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847128400.mount: Deactivated successfully. Sep 13 00:08:16.721492 containerd[1508]: time="2025-09-13T00:08:16.721459417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:16.722512 containerd[1508]: time="2025-09-13T00:08:16.721953524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:08:16.723312 containerd[1508]: time="2025-09-13T00:08:16.723269530Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:16.725348 containerd[1508]: time="2025-09-13T00:08:16.725327525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:16.726537 containerd[1508]: time="2025-09-13T00:08:16.725823154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 2.465570478s" Sep 13 00:08:16.726537 containerd[1508]: time="2025-09-13T00:08:16.725849533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:08:16.727559 containerd[1508]: time="2025-09-13T00:08:16.726749490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:08:16.729083 containerd[1508]: time="2025-09-13T00:08:16.729053407Z" level=info msg="CreateContainer within sandbox \"b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:08:16.744587 containerd[1508]: time="2025-09-13T00:08:16.744553508Z" level=info msg="CreateContainer within sandbox \"b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"17c7d5f6c36ebf7a933703c1f0e1ccc5e4f9e14989854c580018465aed09079e\"" Sep 13 00:08:16.745160 containerd[1508]: time="2025-09-13T00:08:16.745003562Z" level=info msg="StartContainer for \"17c7d5f6c36ebf7a933703c1f0e1ccc5e4f9e14989854c580018465aed09079e\"" Sep 13 00:08:16.756968 
systemd-networkd[1400]: caliae665c68b9b: Gained IPv6LL Sep 13 00:08:16.769285 systemd[1]: Started cri-containerd-17c7d5f6c36ebf7a933703c1f0e1ccc5e4f9e14989854c580018465aed09079e.scope - libcontainer container 17c7d5f6c36ebf7a933703c1f0e1ccc5e4f9e14989854c580018465aed09079e. Sep 13 00:08:16.808272 containerd[1508]: time="2025-09-13T00:08:16.808119441Z" level=info msg="StopPodSandbox for \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\"" Sep 13 00:08:16.809742 containerd[1508]: time="2025-09-13T00:08:16.809673393Z" level=info msg="StartContainer for \"17c7d5f6c36ebf7a933703c1f0e1ccc5e4f9e14989854c580018465aed09079e\" returns successfully" Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.876 [INFO][4946] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.878 [INFO][4946] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" iface="eth0" netns="/var/run/netns/cni-15147076-03ef-0347-cfc4-7a80c1ab6184" Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.878 [INFO][4946] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" iface="eth0" netns="/var/run/netns/cni-15147076-03ef-0347-cfc4-7a80c1ab6184" Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.879 [INFO][4946] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" iface="eth0" netns="/var/run/netns/cni-15147076-03ef-0347-cfc4-7a80c1ab6184" Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.879 [INFO][4946] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.879 [INFO][4946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.895 [INFO][4956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" HandleID="k8s-pod-network.d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.895 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.895 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.900 [WARNING][4956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" HandleID="k8s-pod-network.d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.900 [INFO][4956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" HandleID="k8s-pod-network.d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.901 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:16.905599 containerd[1508]: 2025-09-13 00:08:16.903 [INFO][4946] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:16.906186 containerd[1508]: time="2025-09-13T00:08:16.906016599Z" level=info msg="TearDown network for sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\" successfully" Sep 13 00:08:16.906186 containerd[1508]: time="2025-09-13T00:08:16.906053569Z" level=info msg="StopPodSandbox for \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\" returns successfully" Sep 13 00:08:16.906748 containerd[1508]: time="2025-09-13T00:08:16.906712423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6ffd985b-9cv9t,Uid:c063e060-6c08-4a15-9d6a-d83f8c71099c,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:08:17.015814 systemd-networkd[1400]: cali99a78ba538d: Link UP Sep 13 00:08:17.015962 systemd-networkd[1400]: cali99a78ba538d: Gained carrier Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.946 [INFO][4962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0 calico-apiserver-7b6ffd985b- calico-apiserver c063e060-6c08-4a15-9d6a-d83f8c71099c 968 0 2025-09-13 00:07:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b6ffd985b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-8d584fda4c calico-apiserver-7b6ffd985b-9cv9t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali99a78ba538d [] [] }} ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-9cv9t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.947 [INFO][4962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-9cv9t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.973 [INFO][4975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" 
HandleID="k8s-pod-network.2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.973 [INFO][4975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" HandleID="k8s-pod-network.2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-8d584fda4c", "pod":"calico-apiserver-7b6ffd985b-9cv9t", "timestamp":"2025-09-13 00:08:16.973480645 +0000 UTC"}, Hostname:"ci-4081-3-5-n-8d584fda4c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.973 [INFO][4975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.973 [INFO][4975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.973 [INFO][4975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-8d584fda4c' Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.980 [INFO][4975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.985 [INFO][4975] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.990 [INFO][4975] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.994 [INFO][4975] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.996 [INFO][4975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.996 [INFO][4975] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:16.998 [INFO][4975] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89 Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:17.002 [INFO][4975] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:17.009 [INFO][4975] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.136/26] block=192.168.86.128/26 handle="k8s-pod-network.2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:17.033310 
containerd[1508]: 2025-09-13 00:08:17.009 [INFO][4975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.136/26] handle="k8s-pod-network.2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" host="ci-4081-3-5-n-8d584fda4c" Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:17.009 [INFO][4975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:17.033310 containerd[1508]: 2025-09-13 00:08:17.009 [INFO][4975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.136/26] IPv6=[] ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" HandleID="k8s-pod-network.2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:17.035217 containerd[1508]: 2025-09-13 00:08:17.012 [INFO][4962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-9cv9t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0", GenerateName:"calico-apiserver-7b6ffd985b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c063e060-6c08-4a15-9d6a-d83f8c71099c", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6ffd985b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"", Pod:"calico-apiserver-7b6ffd985b-9cv9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99a78ba538d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:17.035217 containerd[1508]: 2025-09-13 00:08:17.012 [INFO][4962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.136/32] ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-9cv9t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:17.035217 containerd[1508]: 2025-09-13 00:08:17.012 [INFO][4962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99a78ba538d ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-9cv9t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:17.035217 containerd[1508]: 2025-09-13 
00:08:17.015 [INFO][4962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-9cv9t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:17.035217 containerd[1508]: 2025-09-13 00:08:17.016 [INFO][4962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-9cv9t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0", GenerateName:"calico-apiserver-7b6ffd985b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c063e060-6c08-4a15-9d6a-d83f8c71099c", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6ffd985b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89", Pod:"calico-apiserver-7b6ffd985b-9cv9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99a78ba538d", MAC:"76:56:17:c6:11:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:17.035217 containerd[1508]: 2025-09-13 00:08:17.026 [INFO][4962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89" Namespace="calico-apiserver" Pod="calico-apiserver-7b6ffd985b-9cv9t" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:17.049835 systemd[1]: run-netns-cni\x2d15147076\x2d03ef\x2d0347\x2dcfc4\x2d7a80c1ab6184.mount: Deactivated successfully. Sep 13 00:08:17.096715 containerd[1508]: time="2025-09-13T00:08:17.096598377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:17.096715 containerd[1508]: time="2025-09-13T00:08:17.096659211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:17.096715 containerd[1508]: time="2025-09-13T00:08:17.096672165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:17.097422 containerd[1508]: time="2025-09-13T00:08:17.097329567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:17.143902 systemd[1]: run-containerd-runc-k8s.io-2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89-runc.PJgpxR.mount: Deactivated successfully. Sep 13 00:08:17.153495 systemd[1]: Started cri-containerd-2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89.scope - libcontainer container 2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89. Sep 13 00:08:17.175993 kubelet[2564]: I0913 00:08:17.174809 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-5x8gx" podStartSLOduration=23.706497001 podStartE2EDuration="26.174792395s" podCreationTimestamp="2025-09-13 00:07:51 +0000 UTC" firstStartedPulling="2025-09-13 00:08:14.258327249 +0000 UTC m=+39.563305270" lastFinishedPulling="2025-09-13 00:08:16.726622641 +0000 UTC m=+42.031600664" observedRunningTime="2025-09-13 00:08:17.171441226 +0000 UTC m=+42.476419248" watchObservedRunningTime="2025-09-13 00:08:17.174792395 +0000 UTC m=+42.479770417" Sep 13 00:08:17.226793 containerd[1508]: time="2025-09-13T00:08:17.226365170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6ffd985b-9cv9t,Uid:c063e060-6c08-4a15-9d6a-d83f8c71099c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89\"" Sep 13 00:08:18.048225 systemd[1]: run-containerd-runc-k8s.io-17c7d5f6c36ebf7a933703c1f0e1ccc5e4f9e14989854c580018465aed09079e-runc.C4q7Rg.mount: Deactivated successfully. Sep 13 00:08:18.187466 systemd[1]: run-containerd-runc-k8s.io-17c7d5f6c36ebf7a933703c1f0e1ccc5e4f9e14989854c580018465aed09079e-runc.0s5KQk.mount: Deactivated successfully. 
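
The pod_startup_latency_tracker line above for goldmane-7988f88666-5x8gx shows how podStartSLOduration is derived: take the end-to-end duration (26.174792395s, from pod creation at 00:07:51 to the observed running time) and subtract the image-pull window measured on the monotonic clock (m=+42.031600664 minus m=+39.563305270 = 2.468295394s), which gives exactly the logged 23.706497001s. The arithmetic, as a check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Monotonic offsets (m=+...) from the kubelet line above.
	firstPull := 39563305270 * time.Nanosecond
	lastPull := 42031600664 * time.Nanosecond
	e2e := 26174792395 * time.Nanosecond // podStartE2EDuration

	pullWindow := lastPull - firstPull
	slo := e2e - pullWindow // startup latency excluding image pulls
	fmt.Println("pull window:", pullWindow)        // 2.468295394s
	fmt.Println("podStartSLOduration:", slo)       // 23.706497001s
}
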
Sep 13 00:08:18.356801 systemd-networkd[1400]: cali99a78ba538d: Gained IPv6LL Sep 13 00:08:18.398168 containerd[1508]: time="2025-09-13T00:08:18.398107280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:18.399234 containerd[1508]: time="2025-09-13T00:08:18.399022705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:08:18.400858 containerd[1508]: time="2025-09-13T00:08:18.399984898Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:18.402081 containerd[1508]: time="2025-09-13T00:08:18.401573275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:18.402081 containerd[1508]: time="2025-09-13T00:08:18.401969296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.675195242s" Sep 13 00:08:18.402081 containerd[1508]: time="2025-09-13T00:08:18.401991318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:08:18.405837 containerd[1508]: time="2025-09-13T00:08:18.405807839Z" level=info msg="CreateContainer within sandbox \"b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:08:18.405999 containerd[1508]: time="2025-09-13T00:08:18.405972428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:08:18.424354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251957931.mount: Deactivated successfully. Sep 13 00:08:18.430232 containerd[1508]: time="2025-09-13T00:08:18.430200107Z" level=info msg="CreateContainer within sandbox \"b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5f3efda7c838347e3312a23a55f39dae5be70545deebee82e3f40a4a2f283d42\"" Sep 13 00:08:18.431099 containerd[1508]: time="2025-09-13T00:08:18.430610586Z" level=info msg="StartContainer for \"5f3efda7c838347e3312a23a55f39dae5be70545deebee82e3f40a4a2f283d42\"" Sep 13 00:08:18.458428 systemd[1]: Started cri-containerd-5f3efda7c838347e3312a23a55f39dae5be70545deebee82e3f40a4a2f283d42.scope - libcontainer container 5f3efda7c838347e3312a23a55f39dae5be70545deebee82e3f40a4a2f283d42. Sep 13 00:08:18.493295 containerd[1508]: time="2025-09-13T00:08:18.493221690Z" level=info msg="StartContainer for \"5f3efda7c838347e3312a23a55f39dae5be70545deebee82e3f40a4a2f283d42\" returns successfully" Sep 13 00:08:19.197802 systemd[1]: run-containerd-runc-k8s.io-17c7d5f6c36ebf7a933703c1f0e1ccc5e4f9e14989854c580018465aed09079e-runc.HgsFqF.mount: Deactivated successfully. 
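The kubelet startup-latency entry for the goldmane pod above is internally consistent: `podStartE2EDuration` is the watch-observed running time minus `podCreationTimestamp`, and `podStartSLOduration` is that figure with the image-pull window (`lastFinishedPulling` minus `firstStartedPulling`) subtracted, matching how the tracker excludes pull time from the startup SLI. A quick check of the arithmetic with Go's time package; kubelet subtracts the monotonic (`m=+…`) readings for the pull window, so the wall-clock computation below lands about 2ns off the logged 23.706497001s:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the kubelet entry above.
	created := parse("2025-09-13 00:07:51 +0000 UTC")
	firstPull := parse("2025-09-13 00:08:14.258327249 +0000 UTC")
	lastPull := parse("2025-09-13 00:08:16.726622641 +0000 UTC")
	running := parse("2025-09-13 00:08:17.174792395 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 26.174792395s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~23.706497s
	fmt.Println(e2e, slo)
}
```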
Sep 13 00:08:20.712195 containerd[1508]: time="2025-09-13T00:08:20.712114990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:20.713163 containerd[1508]: time="2025-09-13T00:08:20.713047426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:08:20.713830 containerd[1508]: time="2025-09-13T00:08:20.713795770Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:20.722668 containerd[1508]: time="2025-09-13T00:08:20.722623172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:20.723545 containerd[1508]: time="2025-09-13T00:08:20.723181007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.31718252s" Sep 13 00:08:20.723545 containerd[1508]: time="2025-09-13T00:08:20.723222985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:08:20.763513 containerd[1508]: time="2025-09-13T00:08:20.763471542Z" level=info msg="CreateContainer within sandbox \"777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:08:20.769666 containerd[1508]: time="2025-09-13T00:08:20.768998960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:08:20.791421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3817236660.mount: Deactivated successfully. Sep 13 00:08:20.792330 containerd[1508]: time="2025-09-13T00:08:20.792093841Z" level=info msg="CreateContainer within sandbox \"777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"860fbc1f7b1be5ee384bb1b27668ebf44d4ce43022fa4bbb4e91c22ae0b95ab1\"" Sep 13 00:08:20.793016 containerd[1508]: time="2025-09-13T00:08:20.792985973Z" level=info msg="StartContainer for \"860fbc1f7b1be5ee384bb1b27668ebf44d4ce43022fa4bbb4e91c22ae0b95ab1\"" Sep 13 00:08:20.823285 systemd[1]: Started cri-containerd-860fbc1f7b1be5ee384bb1b27668ebf44d4ce43022fa4bbb4e91c22ae0b95ab1.scope - libcontainer container 860fbc1f7b1be5ee384bb1b27668ebf44d4ce43022fa4bbb4e91c22ae0b95ab1. 
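Each pull sequence in this journal ends with a `Pulled image … in <duration>` entry: 1.675195242s for the ~10 MB csi image and 2.31718252s for the ~52 MB kube-controllers image above. A throwaway extractor for exactly this entry shape, assuming the escaped-quote logfmt form seen in this journal (a real tool would want a proper logfmt parser rather than a regex):

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches the escaped-quote form seen in these journal lines:
//   msg="Pulled image \"<ref>\" ... in <duration>"
var pulled = regexp.MustCompile(`Pulled image \\"([^"\\]+)\\".* in ([0-9.]+(?:ms|s))`)

func main() {
	// Shortened copy of the kube-controllers entry above.
	line := `time="2025-09-13T00:08:20.723181007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df19...\", size \"52770417\" in 2.31718252s"`
	if m := pulled.FindStringSubmatch(line); m != nil {
		// image=ghcr.io/flatcar/calico/kube-controllers:v3.30.3 duration=2.31718252s
		fmt.Printf("image=%s duration=%s\n", m[1], m[2])
	}
}
```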
Sep 13 00:08:20.865965 containerd[1508]: time="2025-09-13T00:08:20.865915844Z" level=info msg="StartContainer for \"860fbc1f7b1be5ee384bb1b27668ebf44d4ce43022fa4bbb4e91c22ae0b95ab1\" returns successfully" Sep 13 00:08:21.204646 kubelet[2564]: I0913 00:08:21.204472 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-88549f55-jdldt" podStartSLOduration=23.731708618 podStartE2EDuration="29.204451495s" podCreationTimestamp="2025-09-13 00:07:52 +0000 UTC" firstStartedPulling="2025-09-13 00:08:15.251664159 +0000 UTC m=+40.556642180" lastFinishedPulling="2025-09-13 00:08:20.724407035 +0000 UTC m=+46.029385057" observedRunningTime="2025-09-13 00:08:21.203325966 +0000 UTC m=+46.508303998" watchObservedRunningTime="2025-09-13 00:08:21.204451495 +0000 UTC m=+46.509429527" Sep 13 00:08:23.995247 containerd[1508]: time="2025-09-13T00:08:23.995191206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:23.996370 containerd[1508]: time="2025-09-13T00:08:23.996316986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:08:23.997406 containerd[1508]: time="2025-09-13T00:08:23.997368537Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:23.999123 containerd[1508]: time="2025-09-13T00:08:23.999085405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:23.999542 containerd[1508]: time="2025-09-13T00:08:23.999517074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.230490231s" Sep 13 00:08:24.000014 containerd[1508]: time="2025-09-13T00:08:23.999545076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:08:24.000761 containerd[1508]: time="2025-09-13T00:08:24.000732582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:08:24.003313 containerd[1508]: time="2025-09-13T00:08:24.003284125Z" level=info msg="CreateContainer within sandbox \"9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:08:24.015060 containerd[1508]: time="2025-09-13T00:08:24.015030600Z" level=info msg="CreateContainer within sandbox \"9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e9b5e7f848882235ec81f0a73e2bcd5dffbe846bb8127584f317ec0bfa2f2ed0\"" Sep 13 00:08:24.015901 containerd[1508]: time="2025-09-13T00:08:24.015878409Z" level=info msg="StartContainer for \"e9b5e7f848882235ec81f0a73e2bcd5dffbe846bb8127584f317ec0bfa2f2ed0\"" Sep 13 00:08:24.058268 systemd[1]: Started 
cri-containerd-e9b5e7f848882235ec81f0a73e2bcd5dffbe846bb8127584f317ec0bfa2f2ed0.scope - libcontainer container e9b5e7f848882235ec81f0a73e2bcd5dffbe846bb8127584f317ec0bfa2f2ed0. Sep 13 00:08:24.103451 containerd[1508]: time="2025-09-13T00:08:24.103403834Z" level=info msg="StartContainer for \"e9b5e7f848882235ec81f0a73e2bcd5dffbe846bb8127584f317ec0bfa2f2ed0\" returns successfully" Sep 13 00:08:24.226993 kubelet[2564]: I0913 00:08:24.226936 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b6ffd985b-5hsfg" podStartSLOduration=26.593809788 podStartE2EDuration="35.226916508s" podCreationTimestamp="2025-09-13 00:07:49 +0000 UTC" firstStartedPulling="2025-09-13 00:08:15.367399017 +0000 UTC m=+40.672377039" lastFinishedPulling="2025-09-13 00:08:24.000505727 +0000 UTC m=+49.305483759" observedRunningTime="2025-09-13 00:08:24.226609473 +0000 UTC m=+49.531587496" watchObservedRunningTime="2025-09-13 00:08:24.226916508 +0000 UTC m=+49.531894540" Sep 13 00:08:24.479220 containerd[1508]: time="2025-09-13T00:08:24.479146922Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:24.480150 containerd[1508]: time="2025-09-13T00:08:24.479937194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:08:24.481961 containerd[1508]: time="2025-09-13T00:08:24.481810254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 481.052415ms" Sep 13 00:08:24.481961 containerd[1508]: time="2025-09-13T00:08:24.481838036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:08:24.483156 containerd[1508]: time="2025-09-13T00:08:24.483103257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:08:24.486185 containerd[1508]: time="2025-09-13T00:08:24.485594257Z" level=info msg="CreateContainer within sandbox \"2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:08:24.501373 containerd[1508]: time="2025-09-13T00:08:24.501309531Z" level=info msg="CreateContainer within sandbox \"2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"86430048d5eed3c5b16381e1c454572233f30534d54a008fb5abcec528118c9c\"" Sep 13 00:08:24.501798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3027897131.mount: Deactivated successfully. Sep 13 00:08:24.504852 containerd[1508]: time="2025-09-13T00:08:24.503341469Z" level=info msg="StartContainer for \"86430048d5eed3c5b16381e1c454572233f30534d54a008fb5abcec528118c9c\"" Sep 13 00:08:24.536329 systemd[1]: Started cri-containerd-86430048d5eed3c5b16381e1c454572233f30534d54a008fb5abcec528118c9c.scope - libcontainer container 86430048d5eed3c5b16381e1c454572233f30534d54a008fb5abcec528118c9c. 
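Note the second apiserver pull above: it returns in 481.052415ms with only 77 bytes read, versus 3.23s and ~47 MB the first time, and the content-store event is `ImageUpdate` rather than `ImageCreate`. The layers were already present locally, so the pull reduces to re-resolving the manifest. A minimal sketch of timing such a pull, assuming the containerd 1.x Go client and the `k8s.io` namespace that CRI-managed images live in on a node like this:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the node's containerd socket (default path assumed).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI keeps its images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// If the blobs are already in the content store, this completes in
	// milliseconds (manifest resolution only), as in the 481ms entry above.
	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.3")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), "pulled in", time.Since(start))
}
```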
Sep 13 00:08:24.616194 containerd[1508]: time="2025-09-13T00:08:24.615788012Z" level=info msg="StartContainer for \"86430048d5eed3c5b16381e1c454572233f30534d54a008fb5abcec528118c9c\" returns successfully" Sep 13 00:08:25.217661 kubelet[2564]: I0913 00:08:25.217629 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:25.226759 kubelet[2564]: I0913 00:08:25.226600 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b6ffd985b-9cv9t" podStartSLOduration=28.971190014 podStartE2EDuration="36.226569465s" podCreationTimestamp="2025-09-13 00:07:49 +0000 UTC" firstStartedPulling="2025-09-13 00:08:17.227445976 +0000 UTC m=+42.532423998" lastFinishedPulling="2025-09-13 00:08:24.482825427 +0000 UTC m=+49.787803449" observedRunningTime="2025-09-13 00:08:25.222924202 +0000 UTC m=+50.527902224" watchObservedRunningTime="2025-09-13 00:08:25.226569465 +0000 UTC m=+50.531547487" Sep 13 00:08:26.385563 containerd[1508]: time="2025-09-13T00:08:26.385513221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:26.386465 containerd[1508]: time="2025-09-13T00:08:26.386431432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:08:26.387727 containerd[1508]: time="2025-09-13T00:08:26.387664514Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:26.389452 containerd[1508]: time="2025-09-13T00:08:26.389416867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:26.390074 containerd[1508]: time="2025-09-13T00:08:26.389739863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.906490591s" Sep 13 00:08:26.390074 containerd[1508]: time="2025-09-13T00:08:26.389766253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:08:26.395909 containerd[1508]: time="2025-09-13T00:08:26.395840669Z" level=info msg="CreateContainer within sandbox \"b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:08:26.409236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749582427.mount: Deactivated successfully. 
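The csi-node-driver-registrar container created above exists to perform the handshake visible in the kubelet lines just below: it serves the kubelet plugin-registration gRPC API on a socket under `/var/lib/kubelet/plugins_registry/`, and its `GetInfo` response carries the driver name (`csi.tigera.io`), the CSI endpoint, and the supported versions (`1.0.0`) that kubelet logs while validating and registering the plugin. A bare-bones sketch of that registration server, assuming the `k8s.io/kubelet/pkg/apis/pluginregistration/v1` package; the registry socket name follows the conventional `<driver>-reg.sock` pattern and is an assumption, and the real registrar additionally handles retries, stale-socket cleanup, and health checking:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type server struct{}

// GetInfo is what produces the kubelet's "Trying to validate a new CSI
// Driver with name: csi.tigera.io ... versions: 1.0.0" entry below.
func (server) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

// NotifyRegistrationStatus is kubelet's callback once registration succeeds.
func (server) NotifyRegistrationStatus(ctx context.Context, st *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	log.Printf("kubelet registration status: %+v", st)
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// Socket name assumed; kubelet watches this directory for new plugins.
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	registerapi.RegisterRegistrationServer(s, server{})
	log.Fatal(s.Serve(l))
}
```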
Sep 13 00:08:26.411348 containerd[1508]: time="2025-09-13T00:08:26.409661043Z" level=info msg="CreateContainer within sandbox \"b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3c4d55c513e508afc6ac312f80b2141cd538a89556edb1513de04ed22d599432\"" Sep 13 00:08:26.411348 containerd[1508]: time="2025-09-13T00:08:26.410290533Z" level=info msg="StartContainer for \"3c4d55c513e508afc6ac312f80b2141cd538a89556edb1513de04ed22d599432\"" Sep 13 00:08:26.452256 systemd[1]: Started cri-containerd-3c4d55c513e508afc6ac312f80b2141cd538a89556edb1513de04ed22d599432.scope - libcontainer container 3c4d55c513e508afc6ac312f80b2141cd538a89556edb1513de04ed22d599432. Sep 13 00:08:26.476378 containerd[1508]: time="2025-09-13T00:08:26.476343520Z" level=info msg="StartContainer for \"3c4d55c513e508afc6ac312f80b2141cd538a89556edb1513de04ed22d599432\" returns successfully" Sep 13 00:08:27.032281 kubelet[2564]: I0913 00:08:27.027221 2564 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:08:27.033624 kubelet[2564]: I0913 00:08:27.033253 2564 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:08:34.962778 containerd[1508]: time="2025-09-13T00:08:34.961781585Z" level=info msg="StopPodSandbox for \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\"" Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.234 [WARNING][5385] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.236 [INFO][5385] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.236 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" iface="eth0" netns="" Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.236 [INFO][5385] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.236 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.410 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" HandleID="k8s-pod-network.5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.412 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.413 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.425 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" HandleID="k8s-pod-network.5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.425 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" HandleID="k8s-pod-network.5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.427 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:35.432285 containerd[1508]: 2025-09-13 00:08:35.429 [INFO][5385] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:35.442073 containerd[1508]: time="2025-09-13T00:08:35.442029478Z" level=info msg="TearDown network for sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\" successfully" Sep 13 00:08:35.442510 containerd[1508]: time="2025-09-13T00:08:35.442185130Z" level=info msg="StopPodSandbox for \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\" returns successfully" Sep 13 00:08:35.565455 containerd[1508]: time="2025-09-13T00:08:35.565402484Z" level=info msg="RemovePodSandbox for \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\"" Sep 13 00:08:35.565455 containerd[1508]: time="2025-09-13T00:08:35.565463368Z" level=info msg="Forcibly stopping sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\"" Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.638 [WARNING][5406] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" WorkloadEndpoint="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.644 [INFO][5406] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.644 [INFO][5406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" iface="eth0" netns="" Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.644 [INFO][5406] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.644 [INFO][5406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.693 [INFO][5413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" HandleID="k8s-pod-network.5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.693 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.693 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.699 [WARNING][5413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" HandleID="k8s-pod-network.5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.699 [INFO][5413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" HandleID="k8s-pod-network.5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Workload="ci--4081--3--5--n--8d584fda4c-k8s-whisker--6d76f57766--5v2d8-eth0" Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.701 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:35.712278 containerd[1508]: 2025-09-13 00:08:35.704 [INFO][5406] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243" Sep 13 00:08:35.714656 containerd[1508]: time="2025-09-13T00:08:35.712290733Z" level=info msg="TearDown network for sandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\" successfully" Sep 13 00:08:35.740292 containerd[1508]: time="2025-09-13T00:08:35.740087473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:35.781004 containerd[1508]: time="2025-09-13T00:08:35.780877913Z" level=info msg="RemovePodSandbox \"5246518ec44d1211cdb3c9f07625c973e0a914a1b8486203e8cfa1930676e243\" returns successfully" Sep 13 00:08:35.809354 containerd[1508]: time="2025-09-13T00:08:35.808963123Z" level=info msg="StopPodSandbox for \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\"" Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.854 [WARNING][5427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0", GenerateName:"calico-kube-controllers-88549f55-", Namespace:"calico-system", SelfLink:"", UID:"8d773f84-ba61-4193-b743-eaaa25a9f55a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"88549f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c", Pod:"calico-kube-controllers-88549f55-jdldt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali63d860469ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.855 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.855 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" iface="eth0" netns="" Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.855 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.855 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.888 [INFO][5435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" HandleID="k8s-pod-network.dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.888 [INFO][5435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.888 [INFO][5435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.893 [WARNING][5435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" HandleID="k8s-pod-network.dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.893 [INFO][5435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" HandleID="k8s-pod-network.dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.895 [INFO][5435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:35.898935 containerd[1508]: 2025-09-13 00:08:35.897 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:35.900046 containerd[1508]: time="2025-09-13T00:08:35.898985146Z" level=info msg="TearDown network for sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\" successfully" Sep 13 00:08:35.900046 containerd[1508]: time="2025-09-13T00:08:35.899009542Z" level=info msg="StopPodSandbox for \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\" returns successfully" Sep 13 00:08:35.900046 containerd[1508]: time="2025-09-13T00:08:35.899537291Z" level=info msg="RemovePodSandbox for \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\"" Sep 13 00:08:35.900046 containerd[1508]: time="2025-09-13T00:08:35.899559232Z" level=info msg="Forcibly stopping sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\"" Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.935 [WARNING][5450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0", GenerateName:"calico-kube-controllers-88549f55-", Namespace:"calico-system", SelfLink:"", UID:"8d773f84-ba61-4193-b743-eaaa25a9f55a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"88549f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"777c86b726a2af42c7cbba809edca1c47daab970bf8460e565d92aa1574bb76c", Pod:"calico-kube-controllers-88549f55-jdldt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali63d860469ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.936 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.936 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" iface="eth0" netns="" Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.936 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.936 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.965 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" HandleID="k8s-pod-network.dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.965 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.965 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.971 [WARNING][5458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" HandleID="k8s-pod-network.dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.971 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" HandleID="k8s-pod-network.dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--kube--controllers--88549f55--jdldt-eth0" Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.972 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:35.977639 containerd[1508]: 2025-09-13 00:08:35.975 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915" Sep 13 00:08:35.979228 containerd[1508]: time="2025-09-13T00:08:35.978058975Z" level=info msg="TearDown network for sandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\" successfully" Sep 13 00:08:35.982992 containerd[1508]: time="2025-09-13T00:08:35.982776489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:35.982992 containerd[1508]: time="2025-09-13T00:08:35.982861708Z" level=info msg="RemovePodSandbox \"dff4380912108c2547e6a89bba7208a2efb0b4412d8765f1e4b6fb42ed33b915\" returns successfully" Sep 13 00:08:35.983510 containerd[1508]: time="2025-09-13T00:08:35.983416899Z" level=info msg="StopPodSandbox for \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\"" Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.035 [WARNING][5472] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32aa0455-0ec7-4086-b5ba-65161853a1e0", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da", Pod:"csi-node-driver-cs8pq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9f1e67a896", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.036 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.036 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" iface="eth0" netns="" Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.036 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.036 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.083 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" HandleID="k8s-pod-network.757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.083 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.084 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.093 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" HandleID="k8s-pod-network.757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.093 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" HandleID="k8s-pod-network.757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.094 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:36.102660 containerd[1508]: 2025-09-13 00:08:36.098 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:36.105331 containerd[1508]: time="2025-09-13T00:08:36.102708276Z" level=info msg="TearDown network for sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\" successfully" Sep 13 00:08:36.105331 containerd[1508]: time="2025-09-13T00:08:36.102752318Z" level=info msg="StopPodSandbox for \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\" returns successfully" Sep 13 00:08:36.163652 containerd[1508]: time="2025-09-13T00:08:36.163366362Z" level=info msg="RemovePodSandbox for \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\"" Sep 13 00:08:36.168947 containerd[1508]: time="2025-09-13T00:08:36.168185176Z" level=info msg="Forcibly stopping sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\"" Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.221 [WARNING][5494] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32aa0455-0ec7-4086-b5ba-65161853a1e0", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"b6b6c40181e8699f73b1bf3ca74eb62a3cce2102308620f3500c5982bc5b49da", Pod:"csi-node-driver-cs8pq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9f1e67a896", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.221 [INFO][5494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.221 [INFO][5494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" iface="eth0" netns="" Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.221 [INFO][5494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.221 [INFO][5494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.258 [INFO][5502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" HandleID="k8s-pod-network.757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.259 [INFO][5502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.260 [INFO][5502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.274 [WARNING][5502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" HandleID="k8s-pod-network.757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.275 [INFO][5502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" HandleID="k8s-pod-network.757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Workload="ci--4081--3--5--n--8d584fda4c-k8s-csi--node--driver--cs8pq-eth0" Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.283 [INFO][5502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:36.287339 containerd[1508]: 2025-09-13 00:08:36.284 [INFO][5494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672" Sep 13 00:08:36.288094 containerd[1508]: time="2025-09-13T00:08:36.287323075Z" level=info msg="TearDown network for sandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\" successfully" Sep 13 00:08:36.307654 containerd[1508]: time="2025-09-13T00:08:36.307615598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:36.307741 containerd[1508]: time="2025-09-13T00:08:36.307676792Z" level=info msg="RemovePodSandbox \"757cf2804f7809b90c00c961d3fbb7fdadbf0aa9617373f2325d3fd1fbcd0672\" returns successfully" Sep 13 00:08:36.311197 containerd[1508]: time="2025-09-13T00:08:36.309506823Z" level=info msg="StopPodSandbox for \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\"" Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.379 [WARNING][5516] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"067bc44d-3d11-4ed9-9172-bf4918269f30", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624", Pod:"coredns-7c65d6cfc9-vl4w6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97e7e4477d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.380 [INFO][5516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.380 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" iface="eth0" netns="" Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.380 [INFO][5516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.380 [INFO][5516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.428 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" HandleID="k8s-pod-network.345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.429 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.429 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.435 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" HandleID="k8s-pod-network.345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.435 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" HandleID="k8s-pod-network.345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.439 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:36.447289 containerd[1508]: 2025-09-13 00:08:36.442 [INFO][5516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:36.448310 containerd[1508]: time="2025-09-13T00:08:36.447550254Z" level=info msg="TearDown network for sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\" successfully" Sep 13 00:08:36.448310 containerd[1508]: time="2025-09-13T00:08:36.447573748Z" level=info msg="StopPodSandbox for \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\" returns successfully" Sep 13 00:08:36.448779 containerd[1508]: time="2025-09-13T00:08:36.448451234Z" level=info msg="RemovePodSandbox for \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\"" Sep 13 00:08:36.448779 containerd[1508]: time="2025-09-13T00:08:36.448479297Z" level=info msg="Forcibly stopping sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\"" Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.489 [WARNING][5538] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"067bc44d-3d11-4ed9-9172-bf4918269f30", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"77af6ba2b545072fdf9d39769b9cde24a9a698a1c277b30935dc0c4c59035624", Pod:"coredns-7c65d6cfc9-vl4w6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97e7e4477d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.489 [INFO][5538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.489 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" iface="eth0" netns="" Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.489 [INFO][5538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.489 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.508 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" HandleID="k8s-pod-network.345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.508 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.508 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
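The WARNING pairs recurring through these teardown bursts (including the one that resumes immediately below) — "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP" and "Asked to release address but it doesn't exist. Ignoring" — are the CNI DEL path staying idempotent: the workload endpoints were re-created under new container IDs earlier in the boot (the struct dumps above show the coredns WEP now pointing at `77af6b…`, not `345ef4…`), so the forced removals of the old sandboxes find nothing to release and still report success. An illustrative release-by-handle pattern, not Calico's actual implementation, showing why the stale delete succeeds anyway:

```go
package main

import (
	"fmt"
	"sync"
)

// Release-by-handle that treats "handle not found" as success, so a
// repeated or stale CNI DEL tears down cleanly instead of erroring.
type ipam struct {
	mu    sync.Mutex        // stands in for the "host-wide IPAM lock" above
	byHdl map[string]string // handleID -> allocated IP
}

func (a *ipam) ReleaseByHandle(handleID string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	ip, ok := a.byHdl[handleID]
	if !ok {
		fmt.Printf("WARNING asked to release %s but it doesn't exist, ignoring\n", handleID)
		return
	}
	delete(a.byHdl, handleID)
	fmt.Printf("released %s (%s)\n", ip, handleID)
}

func main() {
	// Handle ID truncated for the sketch.
	a := &ipam{byHdl: map[string]string{"k8s-pod-network.345ef44f": "192.168.86.132"}}
	a.ReleaseByHandle("k8s-pod-network.345ef44f") // first DEL releases the IP
	a.ReleaseByHandle("k8s-pod-network.345ef44f") // stale second DEL: no-op, still succeeds
}
```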
Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.513 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" HandleID="k8s-pod-network.345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.513 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" HandleID="k8s-pod-network.345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--vl4w6-eth0" Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.515 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:36.519912 containerd[1508]: 2025-09-13 00:08:36.517 [INFO][5538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717" Sep 13 00:08:36.524956 containerd[1508]: time="2025-09-13T00:08:36.520255667Z" level=info msg="TearDown network for sandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\" successfully" Sep 13 00:08:36.533692 containerd[1508]: time="2025-09-13T00:08:36.533667650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:36.533848 containerd[1508]: time="2025-09-13T00:08:36.533831127Z" level=info msg="RemovePodSandbox \"345ef44fec3b150fd7491d7ed4d2a29af46147311c7248bfc3909aa55f48e717\" returns successfully" Sep 13 00:08:36.534415 containerd[1508]: time="2025-09-13T00:08:36.534397168Z" level=info msg="StopPodSandbox for \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\"" Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.582 [WARNING][5559] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"df321835-3377-45e7-8908-644fb093834e", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded", Pod:"goldmane-7988f88666-5x8gx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9a2f0dfe017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.582 [INFO][5559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.583 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" iface="eth0" netns="" Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.583 [INFO][5559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.583 [INFO][5559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.609 [INFO][5567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" HandleID="k8s-pod-network.45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.609 [INFO][5567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.609 [INFO][5567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.613 [WARNING][5567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" HandleID="k8s-pod-network.45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.614 [INFO][5567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" HandleID="k8s-pod-network.45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.616 [INFO][5567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:36.621732 containerd[1508]: 2025-09-13 00:08:36.619 [INFO][5559] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:36.622596 containerd[1508]: time="2025-09-13T00:08:36.622553696Z" level=info msg="TearDown network for sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\" successfully" Sep 13 00:08:36.622637 containerd[1508]: time="2025-09-13T00:08:36.622593150Z" level=info msg="StopPodSandbox for \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\" returns successfully" Sep 13 00:08:36.624186 containerd[1508]: time="2025-09-13T00:08:36.623394673Z" level=info msg="RemovePodSandbox for \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\"" Sep 13 00:08:36.624186 containerd[1508]: time="2025-09-13T00:08:36.623473842Z" level=info msg="Forcibly stopping sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\"" Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.653 [WARNING][5581] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"df321835-3377-45e7-8908-644fb093834e", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"b78e2943be75995d3f0337fde2c252e47c2fdb7ba1ec472e9dcd03b5a3f59ded", Pod:"goldmane-7988f88666-5x8gx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9a2f0dfe017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.653 [INFO][5581] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.653 [INFO][5581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" iface="eth0" netns="" Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.653 [INFO][5581] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.653 [INFO][5581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.678 [INFO][5588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" HandleID="k8s-pod-network.45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.678 [INFO][5588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.678 [INFO][5588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.683 [WARNING][5588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" HandleID="k8s-pod-network.45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.683 [INFO][5588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" HandleID="k8s-pod-network.45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Workload="ci--4081--3--5--n--8d584fda4c-k8s-goldmane--7988f88666--5x8gx-eth0" Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.684 [INFO][5588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:36.689190 containerd[1508]: 2025-09-13 00:08:36.686 [INFO][5581] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815" Sep 13 00:08:36.689190 containerd[1508]: time="2025-09-13T00:08:36.688796342Z" level=info msg="TearDown network for sandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\" successfully" Sep 13 00:08:36.698381 containerd[1508]: time="2025-09-13T00:08:36.698348850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:36.698450 containerd[1508]: time="2025-09-13T00:08:36.698411337Z" level=info msg="RemovePodSandbox \"45209ed777c0c69184542d0a73def866fbfdfcfc92bf3a9c06641799bb5d1815\" returns successfully" Sep 13 00:08:36.698864 containerd[1508]: time="2025-09-13T00:08:36.698842055Z" level=info msg="StopPodSandbox for \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\"" Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.736 [WARNING][5602] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"09981167-486a-4b04-a96e-d7e3bf65b3c6", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251", Pod:"coredns-7c65d6cfc9-hn74t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fe6be658ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.737 [INFO][5602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.737 [INFO][5602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" iface="eth0" netns="" Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.737 [INFO][5602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.737 [INFO][5602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.761 [INFO][5610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" HandleID="k8s-pod-network.6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.761 [INFO][5610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.761 [INFO][5610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.771 [WARNING][5610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" HandleID="k8s-pod-network.6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.771 [INFO][5610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" HandleID="k8s-pod-network.6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.775 [INFO][5610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:36.778655 containerd[1508]: 2025-09-13 00:08:36.776 [INFO][5602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:36.779753 containerd[1508]: time="2025-09-13T00:08:36.778803780Z" level=info msg="TearDown network for sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\" successfully" Sep 13 00:08:36.779753 containerd[1508]: time="2025-09-13T00:08:36.779171249Z" level=info msg="StopPodSandbox for \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\" returns successfully" Sep 13 00:08:36.780276 containerd[1508]: time="2025-09-13T00:08:36.780084611Z" level=info msg="RemovePodSandbox for \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\"" Sep 13 00:08:36.780276 containerd[1508]: time="2025-09-13T00:08:36.780117763Z" level=info msg="Forcibly stopping sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\"" Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.849 [WARNING][5624] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"09981167-486a-4b04-a96e-d7e3bf65b3c6", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"eb2a8d9dffd3d29cf867957a013639027effee1bba8cbdbd1b1376fbda0fa251", Pod:"coredns-7c65d6cfc9-hn74t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fe6be658ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.850 [INFO][5624] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.850 [INFO][5624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" iface="eth0" netns="" Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.850 [INFO][5624] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.850 [INFO][5624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.885 [INFO][5631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" HandleID="k8s-pod-network.6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.885 [INFO][5631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.885 [INFO][5631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.890 [WARNING][5631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" HandleID="k8s-pod-network.6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.890 [INFO][5631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" HandleID="k8s-pod-network.6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Workload="ci--4081--3--5--n--8d584fda4c-k8s-coredns--7c65d6cfc9--hn74t-eth0" Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.892 [INFO][5631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:36.898574 containerd[1508]: 2025-09-13 00:08:36.895 [INFO][5624] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32" Sep 13 00:08:36.900494 containerd[1508]: time="2025-09-13T00:08:36.898578473Z" level=info msg="TearDown network for sandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\" successfully" Sep 13 00:08:36.902320 containerd[1508]: time="2025-09-13T00:08:36.902288739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:36.902365 containerd[1508]: time="2025-09-13T00:08:36.902355723Z" level=info msg="RemovePodSandbox \"6b9f9c7f97100a7eb10be9d59716a7fc6b40819ac4aa7a60ead89639bca8bc32\" returns successfully" Sep 13 00:08:36.902801 containerd[1508]: time="2025-09-13T00:08:36.902776432Z" level=info msg="StopPodSandbox for \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\"" Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.939 [WARNING][5645] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0", GenerateName:"calico-apiserver-7b6ffd985b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b37eaa3-aff0-44f7-9179-93bbfd3008e3", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6ffd985b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1", Pod:"calico-apiserver-7b6ffd985b-5hsfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae665c68b9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.939 [INFO][5645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.939 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" iface="eth0" netns="" Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.939 [INFO][5645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.939 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.968 [INFO][5652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" HandleID="k8s-pod-network.2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.968 [INFO][5652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.970 [INFO][5652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.976 [WARNING][5652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" HandleID="k8s-pod-network.2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.976 [INFO][5652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" HandleID="k8s-pod-network.2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.978 [INFO][5652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:36.981848 containerd[1508]: 2025-09-13 00:08:36.980 [INFO][5645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:36.983357 containerd[1508]: time="2025-09-13T00:08:36.981992980Z" level=info msg="TearDown network for sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\" successfully" Sep 13 00:08:36.983357 containerd[1508]: time="2025-09-13T00:08:36.982015793Z" level=info msg="StopPodSandbox for \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\" returns successfully" Sep 13 00:08:36.983357 containerd[1508]: time="2025-09-13T00:08:36.982536449Z" level=info msg="RemovePodSandbox for \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\"" Sep 13 00:08:36.983357 containerd[1508]: time="2025-09-13T00:08:36.982560134Z" level=info msg="Forcibly stopping sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\"" Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.014 [WARNING][5666] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0", GenerateName:"calico-apiserver-7b6ffd985b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b37eaa3-aff0-44f7-9179-93bbfd3008e3", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6ffd985b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"9f75dd8262f69e3e8fd1fa455a98983c13a4233038ae6a3026dbeff909a117c1", Pod:"calico-apiserver-7b6ffd985b-5hsfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae665c68b9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.014 [INFO][5666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.014 [INFO][5666] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" iface="eth0" netns="" Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.014 [INFO][5666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.014 [INFO][5666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.032 [INFO][5673] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" HandleID="k8s-pod-network.2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.032 [INFO][5673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.032 [INFO][5673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.037 [WARNING][5673] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" HandleID="k8s-pod-network.2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.037 [INFO][5673] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" HandleID="k8s-pod-network.2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--5hsfg-eth0" Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.038 [INFO][5673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:37.042079 containerd[1508]: 2025-09-13 00:08:37.040 [INFO][5666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e" Sep 13 00:08:37.043515 containerd[1508]: time="2025-09-13T00:08:37.042092932Z" level=info msg="TearDown network for sandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\" successfully" Sep 13 00:08:37.045399 containerd[1508]: time="2025-09-13T00:08:37.045351862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:37.045399 containerd[1508]: time="2025-09-13T00:08:37.045422654Z" level=info msg="RemovePodSandbox \"2d8878237a9cee8e9228639d69c151a023968c18739ff454861497b0f3d8e20e\" returns successfully" Sep 13 00:08:37.045987 containerd[1508]: time="2025-09-13T00:08:37.045961654Z" level=info msg="StopPodSandbox for \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\"" Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.078 [WARNING][5687] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0", GenerateName:"calico-apiserver-7b6ffd985b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c063e060-6c08-4a15-9d6a-d83f8c71099c", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6ffd985b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89", Pod:"calico-apiserver-7b6ffd985b-9cv9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99a78ba538d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.078 [INFO][5687] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.078 [INFO][5687] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" iface="eth0" netns="" Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.078 [INFO][5687] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.078 [INFO][5687] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.102 [INFO][5696] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" HandleID="k8s-pod-network.d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.102 [INFO][5696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.103 [INFO][5696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.107 [WARNING][5696] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" HandleID="k8s-pod-network.d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.107 [INFO][5696] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" HandleID="k8s-pod-network.d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.108 [INFO][5696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:37.112557 containerd[1508]: 2025-09-13 00:08:37.110 [INFO][5687] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:37.112969 containerd[1508]: time="2025-09-13T00:08:37.112601206Z" level=info msg="TearDown network for sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\" successfully" Sep 13 00:08:37.112969 containerd[1508]: time="2025-09-13T00:08:37.112623909Z" level=info msg="StopPodSandbox for \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\" returns successfully" Sep 13 00:08:37.113401 containerd[1508]: time="2025-09-13T00:08:37.113370438Z" level=info msg="RemovePodSandbox for \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\"" Sep 13 00:08:37.113446 containerd[1508]: time="2025-09-13T00:08:37.113401026Z" level=info msg="Forcibly stopping sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\"" Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.140 [WARNING][5711] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0", GenerateName:"calico-apiserver-7b6ffd985b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c063e060-6c08-4a15-9d6a-d83f8c71099c", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6ffd985b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-8d584fda4c", ContainerID:"2a0351ab526b9084c3dbc6967e9907bcb8e2d84469234d000df8cce039f5ff89", Pod:"calico-apiserver-7b6ffd985b-9cv9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99a78ba538d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.140 [INFO][5711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.140 [INFO][5711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" iface="eth0" netns="" Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.140 [INFO][5711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.140 [INFO][5711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.160 [INFO][5719] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" HandleID="k8s-pod-network.d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.161 [INFO][5719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.161 [INFO][5719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.167 [WARNING][5719] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" HandleID="k8s-pod-network.d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.167 [INFO][5719] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" HandleID="k8s-pod-network.d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Workload="ci--4081--3--5--n--8d584fda4c-k8s-calico--apiserver--7b6ffd985b--9cv9t-eth0" Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.169 [INFO][5719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:37.173846 containerd[1508]: 2025-09-13 00:08:37.171 [INFO][5711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0" Sep 13 00:08:37.173846 containerd[1508]: time="2025-09-13T00:08:37.173798614Z" level=info msg="TearDown network for sandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\" successfully" Sep 13 00:08:37.177803 containerd[1508]: time="2025-09-13T00:08:37.177624497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:37.177803 containerd[1508]: time="2025-09-13T00:08:37.177712352Z" level=info msg="RemovePodSandbox \"d5499a6d590e2fc95ef61dbf99e8417c81313d06d3b311d13b5bf4cb5c4af5a0\" returns successfully" Sep 13 00:08:38.483980 systemd[1]: run-containerd-runc-k8s.io-17c7d5f6c36ebf7a933703c1f0e1ccc5e4f9e14989854c580018465aed09079e-runc.zGpr3T.mount: Deactivated successfully. Sep 13 00:08:39.300869 systemd[1]: Started sshd@8-65.108.146.26:22-210.231.185.88:57692.service - OpenSSH per-connection server daemon (210.231.185.88:57692). Sep 13 00:08:39.413428 kubelet[2564]: I0913 00:08:39.413383 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:39.549930 kubelet[2564]: I0913 00:08:39.532738 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cs8pq" podStartSLOduration=35.575143152 podStartE2EDuration="47.527626697s" podCreationTimestamp="2025-09-13 00:07:52 +0000 UTC" firstStartedPulling="2025-09-13 00:08:14.438819769 +0000 UTC m=+39.743797791" lastFinishedPulling="2025-09-13 00:08:26.391303314 +0000 UTC m=+51.696281336" observedRunningTime="2025-09-13 00:08:27.318283872 +0000 UTC m=+52.623261894" watchObservedRunningTime="2025-09-13 00:08:39.527626697 +0000 UTC m=+64.832604720" Sep 13 00:08:41.086185 sshd[5768]: Received disconnect from 210.231.185.88 port 57692:11: Bye Bye [preauth] Sep 13 00:08:41.086185 sshd[5768]: Disconnected from authenticating user root 210.231.185.88 port 57692 [preauth] Sep 13 00:08:41.089842 systemd[1]: sshd@8-65.108.146.26:22-210.231.185.88:57692.service: Deactivated successfully. Sep 13 00:08:59.216139 systemd[1]: run-containerd-runc-k8s.io-860fbc1f7b1be5ee384bb1b27668ebf44d4ce43022fa4bbb4e91c22ae0b95ab1-runc.7qWcu0.mount: Deactivated successfully. Sep 13 00:09:10.017276 systemd[1]: Started sshd@9-65.108.146.26:22-147.75.109.163:46698.service - OpenSSH per-connection server daemon (147.75.109.163:46698). 
Sep 13 00:09:11.051371 sshd[5879]: Accepted publickey for core from 147.75.109.163 port 46698 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:11.054300 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:11.063822 systemd-logind[1477]: New session 8 of user core. Sep 13 00:09:11.068363 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:09:12.359185 sshd[5879]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:12.361555 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:09:12.363979 systemd[1]: sshd@9-65.108.146.26:22-147.75.109.163:46698.service: Deactivated successfully. Sep 13 00:09:12.368025 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:09:12.369565 systemd-logind[1477]: Removed session 8. Sep 13 00:09:17.565506 systemd[1]: Started sshd@10-65.108.146.26:22-147.75.109.163:49976.service - OpenSSH per-connection server daemon (147.75.109.163:49976). Sep 13 00:09:18.670695 sshd[5915]: Accepted publickey for core from 147.75.109.163 port 49976 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:18.672248 sshd[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:18.679505 systemd-logind[1477]: New session 9 of user core. Sep 13 00:09:18.684681 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:09:19.648830 sshd[5915]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:19.653075 systemd[1]: sshd@10-65.108.146.26:22-147.75.109.163:49976.service: Deactivated successfully. Sep 13 00:09:19.655024 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:09:19.658165 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:09:19.659422 systemd-logind[1477]: Removed session 9. Sep 13 00:09:24.841434 systemd[1]: Started sshd@11-65.108.146.26:22-147.75.109.163:34136.service - OpenSSH per-connection server daemon (147.75.109.163:34136). Sep 13 00:09:25.976045 sshd[5956]: Accepted publickey for core from 147.75.109.163 port 34136 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:25.977183 sshd[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:25.983755 systemd-logind[1477]: New session 10 of user core. Sep 13 00:09:25.988288 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:09:26.852921 sshd[5956]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:26.856585 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:09:26.857526 systemd[1]: sshd@11-65.108.146.26:22-147.75.109.163:34136.service: Deactivated successfully. Sep 13 00:09:26.859635 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:09:26.861086 systemd-logind[1477]: Removed session 10. Sep 13 00:09:27.000776 systemd[1]: Started sshd@12-65.108.146.26:22-147.75.109.163:34144.service - OpenSSH per-connection server daemon (147.75.109.163:34144). Sep 13 00:09:27.970514 sshd[5970]: Accepted publickey for core from 147.75.109.163 port 34144 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:27.972436 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:27.976836 systemd-logind[1477]: New session 11 of user core. Sep 13 00:09:27.984280 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 13 00:09:28.777880 sshd[5970]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:28.782095 systemd[1]: sshd@12-65.108.146.26:22-147.75.109.163:34144.service: Deactivated successfully. Sep 13 00:09:28.784570 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:09:28.785731 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:09:28.786890 systemd-logind[1477]: Removed session 11. Sep 13 00:09:28.950057 systemd[1]: Started sshd@13-65.108.146.26:22-147.75.109.163:34150.service - OpenSSH per-connection server daemon (147.75.109.163:34150). Sep 13 00:09:29.928331 sshd[5987]: Accepted publickey for core from 147.75.109.163 port 34150 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:29.929793 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:29.935086 systemd-logind[1477]: New session 12 of user core. Sep 13 00:09:29.938322 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:09:30.687332 sshd[5987]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:30.691490 systemd[1]: sshd@13-65.108.146.26:22-147.75.109.163:34150.service: Deactivated successfully. Sep 13 00:09:30.693620 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:09:30.694587 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:09:30.695840 systemd-logind[1477]: Removed session 12. Sep 13 00:09:35.867519 systemd[1]: Started sshd@14-65.108.146.26:22-147.75.109.163:54112.service - OpenSSH per-connection server daemon (147.75.109.163:54112). Sep 13 00:09:36.851658 sshd[6003]: Accepted publickey for core from 147.75.109.163 port 54112 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:36.852922 sshd[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:36.857036 systemd-logind[1477]: New session 13 of user core. Sep 13 00:09:36.865297 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:09:37.609804 sshd[6003]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:37.612447 systemd[1]: sshd@14-65.108.146.26:22-147.75.109.163:54112.service: Deactivated successfully. Sep 13 00:09:37.614115 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:09:37.615572 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:09:37.616604 systemd-logind[1477]: Removed session 13. Sep 13 00:09:37.817235 systemd[1]: Started sshd@15-65.108.146.26:22-147.75.109.163:54120.service - OpenSSH per-connection server daemon (147.75.109.163:54120). Sep 13 00:09:38.909458 sshd[6018]: Accepted publickey for core from 147.75.109.163 port 54120 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:38.911705 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:38.917251 systemd-logind[1477]: New session 14 of user core. Sep 13 00:09:38.921306 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:09:39.899683 sshd[6018]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:39.904999 systemd[1]: sshd@15-65.108.146.26:22-147.75.109.163:54120.service: Deactivated successfully. Sep 13 00:09:39.906807 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:09:39.909023 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:09:39.910633 systemd-logind[1477]: Removed session 14. 
Sep 13 00:09:40.088782 systemd[1]: Started sshd@16-65.108.146.26:22-147.75.109.163:54124.service - OpenSSH per-connection server daemon (147.75.109.163:54124). Sep 13 00:09:41.191050 sshd[6067]: Accepted publickey for core from 147.75.109.163 port 54124 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:41.193248 sshd[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:41.200010 systemd-logind[1477]: New session 15 of user core. Sep 13 00:09:41.204362 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:09:43.904708 sshd[6067]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:43.926892 systemd[1]: sshd@16-65.108.146.26:22-147.75.109.163:54124.service: Deactivated successfully. Sep 13 00:09:43.928923 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:09:43.930964 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:09:43.934760 systemd-logind[1477]: Removed session 15. Sep 13 00:09:44.052607 systemd[1]: Started sshd@17-65.108.146.26:22-147.75.109.163:47356.service - OpenSSH per-connection server daemon (147.75.109.163:47356). Sep 13 00:09:45.076162 sshd[6110]: Accepted publickey for core from 147.75.109.163 port 47356 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:45.076529 sshd[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:45.084938 systemd-logind[1477]: New session 16 of user core. Sep 13 00:09:45.090348 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:09:46.601424 sshd[6110]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:46.606920 systemd[1]: sshd@17-65.108.146.26:22-147.75.109.163:47356.service: Deactivated successfully. Sep 13 00:09:46.614424 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:09:46.615693 systemd-logind[1477]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:09:46.616992 systemd-logind[1477]: Removed session 16. Sep 13 00:09:46.770546 systemd[1]: Started sshd@18-65.108.146.26:22-147.75.109.163:47358.service - OpenSSH per-connection server daemon (147.75.109.163:47358). Sep 13 00:09:47.881181 sshd[6123]: Accepted publickey for core from 147.75.109.163 port 47358 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:47.886996 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:47.899210 systemd-logind[1477]: New session 17 of user core. Sep 13 00:09:47.910497 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:09:49.171761 sshd[6123]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:49.175501 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:09:49.176044 systemd[1]: sshd@18-65.108.146.26:22-147.75.109.163:47358.service: Deactivated successfully. Sep 13 00:09:49.177906 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:09:49.178712 systemd-logind[1477]: Removed session 17. Sep 13 00:09:54.389576 systemd[1]: Started sshd@19-65.108.146.26:22-147.75.109.163:46644.service - OpenSSH per-connection server daemon (147.75.109.163:46644). 
Sep 13 00:09:55.409802 sshd[6181]: Accepted publickey for core from 147.75.109.163 port 46644 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:09:55.411578 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:55.418613 systemd-logind[1477]: New session 18 of user core. Sep 13 00:09:55.422639 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:09:55.581189 systemd[1]: Started sshd@20-65.108.146.26:22-210.231.185.88:20452.service - OpenSSH per-connection server daemon (210.231.185.88:20452). Sep 13 00:09:56.898338 sshd[6181]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:56.905888 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:09:56.906529 systemd[1]: sshd@19-65.108.146.26:22-147.75.109.163:46644.service: Deactivated successfully. Sep 13 00:09:56.909501 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:09:56.916539 systemd-logind[1477]: Removed session 18. Sep 13 00:09:57.062671 sshd[6185]: Invalid user usertest from 210.231.185.88 port 20452 Sep 13 00:09:57.339584 sshd[6185]: Received disconnect from 210.231.185.88 port 20452:11: Bye Bye [preauth] Sep 13 00:09:57.339584 sshd[6185]: Disconnected from invalid user usertest 210.231.185.88 port 20452 [preauth] Sep 13 00:09:57.344034 systemd[1]: sshd@20-65.108.146.26:22-210.231.185.88:20452.service: Deactivated successfully. Sep 13 00:10:02.072481 systemd[1]: Started sshd@21-65.108.146.26:22-147.75.109.163:58742.service - OpenSSH per-connection server daemon (147.75.109.163:58742). Sep 13 00:10:03.097004 sshd[6218]: Accepted publickey for core from 147.75.109.163 port 58742 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY Sep 13 00:10:03.098556 sshd[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:03.103361 systemd-logind[1477]: New session 19 of user core. Sep 13 00:10:03.108400 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:10:04.264194 sshd[6218]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:04.274954 systemd[1]: sshd@21-65.108.146.26:22-147.75.109.163:58742.service: Deactivated successfully. Sep 13 00:10:04.279042 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:10:04.280173 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:10:04.281583 systemd-logind[1477]: Removed session 19. Sep 13 00:10:09.201204 systemd[1]: run-containerd-runc-k8s.io-860fbc1f7b1be5ee384bb1b27668ebf44d4ce43022fa4bbb4e91c22ae0b95ab1-runc.uCvLCz.mount: Deactivated successfully. Sep 13 00:10:19.267170 systemd[1]: cri-containerd-d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002.scope: Deactivated successfully. Sep 13 00:10:19.267396 systemd[1]: cri-containerd-d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002.scope: Consumed 3.113s CPU time, 17.7M memory peak, 0B memory swap peak. Sep 13 00:10:19.465841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002-rootfs.mount: Deactivated successfully. Sep 13 00:10:19.555532 systemd[1]: cri-containerd-2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e.scope: Deactivated successfully. Sep 13 00:10:19.556291 systemd[1]: cri-containerd-2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e.scope: Consumed 14.341s CPU time. 
Sep 13 00:10:19.558631 containerd[1508]: time="2025-09-13T00:10:19.486776244Z" level=info msg="shim disconnected" id=d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002 namespace=k8s.io
Sep 13 00:10:19.558631 containerd[1508]: time="2025-09-13T00:10:19.558595727Z" level=warning msg="cleaning up after shim disconnected" id=d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002 namespace=k8s.io
Sep 13 00:10:19.561564 containerd[1508]: time="2025-09-13T00:10:19.558638187Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:10:19.595091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e-rootfs.mount: Deactivated successfully.
Sep 13 00:10:19.596809 containerd[1508]: time="2025-09-13T00:10:19.596761967Z" level=info msg="shim disconnected" id=2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e namespace=k8s.io
Sep 13 00:10:19.597115 containerd[1508]: time="2025-09-13T00:10:19.596886310Z" level=warning msg="cleaning up after shim disconnected" id=2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e namespace=k8s.io
Sep 13 00:10:19.597115 containerd[1508]: time="2025-09-13T00:10:19.596898673Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:10:19.769482 kubelet[2564]: E0913 00:10:19.765752 2564 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35848->10.0.0.2:2379: read: connection timed out"
Sep 13 00:10:20.142013 kubelet[2564]: I0913 00:10:20.136779 2564 scope.go:117] "RemoveContainer" containerID="d3ff658f7804440907394c2b80de5b3dc42d72fd243dc9d93fc90daa26b01002"
Sep 13 00:10:20.151626 kubelet[2564]: I0913 00:10:20.150529 2564 scope.go:117] "RemoveContainer" containerID="2622369af7297bba1e41a4b708931439407290a22927c73771aabe79cf4c932e"
Sep 13 00:10:20.197548 containerd[1508]: time="2025-09-13T00:10:20.197095623Z" level=info msg="CreateContainer within sandbox \"2b7ac79d1941e8a3ca52b11864085733f33ec90b76336308826634e71aa72ff1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 13 00:10:20.197548 containerd[1508]: time="2025-09-13T00:10:20.197230325Z" level=info msg="CreateContainer within sandbox \"cfe8d6f37bae79c6c68d9b0144f641f9d9440edd50b8669fc42af7fd7e2f6f75\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 13 00:10:20.270810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347102705.mount: Deactivated successfully.
Sep 13 00:10:20.277595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount982464386.mount: Deactivated successfully.
Sep 13 00:10:20.284783 containerd[1508]: time="2025-09-13T00:10:20.284741791Z" level=info msg="CreateContainer within sandbox \"2b7ac79d1941e8a3ca52b11864085733f33ec90b76336308826634e71aa72ff1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1b602984d45f2f74408c2be46bcfe6688390cb81341ce93d50dfc0abb492dd86\""
Sep 13 00:10:20.286089 containerd[1508]: time="2025-09-13T00:10:20.285263067Z" level=info msg="StartContainer for \"1b602984d45f2f74408c2be46bcfe6688390cb81341ce93d50dfc0abb492dd86\""
Sep 13 00:10:20.302287 containerd[1508]: time="2025-09-13T00:10:20.300998700Z" level=info msg="CreateContainer within sandbox \"cfe8d6f37bae79c6c68d9b0144f641f9d9440edd50b8669fc42af7fd7e2f6f75\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b979b0ffe8d7828ffb940ceef286d02e9abda4dc187e1605718517664e7e3dbd\""
Sep 13 00:10:20.302607 containerd[1508]: time="2025-09-13T00:10:20.302584201Z" level=info msg="StartContainer for \"b979b0ffe8d7828ffb940ceef286d02e9abda4dc187e1605718517664e7e3dbd\""
Sep 13 00:10:20.328253 systemd[1]: Started cri-containerd-1b602984d45f2f74408c2be46bcfe6688390cb81341ce93d50dfc0abb492dd86.scope - libcontainer container 1b602984d45f2f74408c2be46bcfe6688390cb81341ce93d50dfc0abb492dd86.
Sep 13 00:10:20.329466 systemd[1]: Started cri-containerd-b979b0ffe8d7828ffb940ceef286d02e9abda4dc187e1605718517664e7e3dbd.scope - libcontainer container b979b0ffe8d7828ffb940ceef286d02e9abda4dc187e1605718517664e7e3dbd.
Sep 13 00:10:20.366672 containerd[1508]: time="2025-09-13T00:10:20.366632946Z" level=info msg="StartContainer for \"b979b0ffe8d7828ffb940ceef286d02e9abda4dc187e1605718517664e7e3dbd\" returns successfully"
Sep 13 00:10:20.375055 containerd[1508]: time="2025-09-13T00:10:20.375027875Z" level=info msg="StartContainer for \"1b602984d45f2f74408c2be46bcfe6688390cb81341ce93d50dfc0abb492dd86\" returns successfully"
Sep 13 00:10:24.869719 kubelet[2564]: E0913 00:10:24.824888 2564 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35616->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-8d584fda4c.1864af0a85e5cfe0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-8d584fda4c,UID:9c51f6e080fb54331b24ebb35d9986ba,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-8d584fda4c,},FirstTimestamp:2025-09-13 00:10:14.312300512 +0000 UTC m=+159.617278575,LastTimestamp:2025-09-13 00:10:14.312300512 +0000 UTC m=+159.617278575,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-8d584fda4c,}"