Sep 12 17:33:04.899012 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:33:04.899036 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:33:04.899044 kernel: BIOS-provided physical RAM map:
Sep 12 17:33:04.899050 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 17:33:04.899056 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 17:33:04.899062 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 17:33:04.899069 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Sep 12 17:33:04.899074 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Sep 12 17:33:04.899082 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 17:33:04.899087 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 17:33:04.899093 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:33:04.899099 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 17:33:04.899105 kernel: NX (Execute Disable) protection: active
Sep 12 17:33:04.899111 kernel: APIC: Static calls initialized
Sep 12 17:33:04.899157 kernel: SMBIOS 2.8 present.
Sep 12 17:33:04.899164 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Sep 12 17:33:04.899170 kernel: Hypervisor detected: KVM
Sep 12 17:33:04.899177 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:33:04.899183 kernel: kvm-clock: using sched offset of 2972085350 cycles
Sep 12 17:33:04.899190 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:33:04.899197 kernel: tsc: Detected 2495.312 MHz processor
Sep 12 17:33:04.899203 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:33:04.899210 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:33:04.899218 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Sep 12 17:33:04.899225 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 17:33:04.899232 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:33:04.899238 kernel: Using GB pages for direct mapping
Sep 12 17:33:04.899245 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:33:04.899251 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Sep 12 17:33:04.899258 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:04.899264 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:04.899271 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:04.899279 kernel: ACPI: FACS 0x000000007CFE0000 000040
Sep 12 17:33:04.899285 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:04.899292 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:04.899298 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:04.899305 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:04.899311 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Sep 12 17:33:04.899318 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Sep 12 17:33:04.899324 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Sep 12 17:33:04.899334 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Sep 12 17:33:04.899341 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Sep 12 17:33:04.899348 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Sep 12 17:33:04.899355 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Sep 12 17:33:04.899361 kernel: No NUMA configuration found
Sep 12 17:33:04.899368 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Sep 12 17:33:04.899377 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Sep 12 17:33:04.899386 kernel: Zone ranges:
Sep 12 17:33:04.899395 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:33:04.899405 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Sep 12 17:33:04.899414 kernel: Normal empty
Sep 12 17:33:04.899423 kernel: Movable zone start for each node
Sep 12 17:33:04.899430 kernel: Early memory node ranges
Sep 12 17:33:04.899437 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 17:33:04.899444 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Sep 12 17:33:04.899451 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Sep 12 17:33:04.899460 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:33:04.899466 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 17:33:04.899473 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 17:33:04.899480 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:33:04.899487 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:33:04.899493 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:33:04.899500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:33:04.899507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:33:04.899514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:33:04.899522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:33:04.899529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:33:04.899535 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:33:04.899542 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:33:04.899549 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:33:04.899556 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:33:04.899563 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 17:33:04.899569 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:33:04.899584 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:33:04.899593 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:33:04.899600 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:33:04.899606 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:33:04.899613 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:33:04.899620 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 12 17:33:04.899628 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:33:04.899635 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:33:04.899642 kernel: random: crng init done
Sep 12 17:33:04.899651 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:33:04.899658 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:33:04.899666 kernel: Fallback order for Node 0: 0
Sep 12 17:33:04.899673 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Sep 12 17:33:04.899680 kernel: Policy zone: DMA32
Sep 12 17:33:04.899687 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:33:04.899694 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 12 17:33:04.899701 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:33:04.899708 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:33:04.899716 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:33:04.899722 kernel: Dynamic Preempt: voluntary
Sep 12 17:33:04.899729 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:33:04.899737 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:33:04.899744 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:33:04.899751 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:33:04.899758 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:33:04.899765 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:33:04.899772 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:33:04.899778 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:33:04.899787 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:33:04.899793 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:33:04.899803 kernel: Console: colour VGA+ 80x25
Sep 12 17:33:04.899812 kernel: printk: console [tty0] enabled
Sep 12 17:33:04.899819 kernel: printk: console [ttyS0] enabled
Sep 12 17:33:04.899826 kernel: ACPI: Core revision 20230628
Sep 12 17:33:04.899833 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:33:04.899840 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:33:04.899847 kernel: x2apic enabled
Sep 12 17:33:04.899855 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:33:04.899862 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:33:04.899869 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 12 17:33:04.899876 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Sep 12 17:33:04.899883 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:33:04.899890 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 17:33:04.899896 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 17:33:04.899903 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:33:04.899917 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:33:04.899924 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:33:04.899931 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 17:33:04.899940 kernel: active return thunk: retbleed_return_thunk
Sep 12 17:33:04.899947 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 17:33:04.899954 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:33:04.899961 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:33:04.899968 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:33:04.899976 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:33:04.899984 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:33:04.899991 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:33:04.899998 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 17:33:04.900006 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:33:04.900013 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:33:04.900020 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:33:04.900027 kernel: landlock: Up and running.
Sep 12 17:33:04.900034 kernel: SELinux: Initializing.
Sep 12 17:33:04.900043 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:33:04.900050 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:33:04.900057 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 17:33:04.900064 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:33:04.900072 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:33:04.900079 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:33:04.900086 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 17:33:04.900093 kernel: ... version: 0
Sep 12 17:33:04.900100 kernel: ... bit width: 48
Sep 12 17:33:04.900109 kernel: ... generic registers: 6
Sep 12 17:33:04.900130 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:33:04.900141 kernel: ... max period: 00007fffffffffff
Sep 12 17:33:04.900166 kernel: ... fixed-purpose events: 0
Sep 12 17:33:04.900188 kernel: ... event mask: 000000000000003f
Sep 12 17:33:04.900209 kernel: signal: max sigframe size: 1776
Sep 12 17:33:04.900220 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:33:04.900227 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:33:04.900234 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:33:04.900243 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:33:04.900250 kernel: .... node #0, CPUs: #1
Sep 12 17:33:04.900257 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:33:04.900264 kernel: smpboot: Max logical packages: 1
Sep 12 17:33:04.900272 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Sep 12 17:33:04.900279 kernel: devtmpfs: initialized
Sep 12 17:33:04.900286 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:33:04.900293 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:33:04.900300 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:33:04.900309 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:33:04.900316 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:33:04.900324 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:33:04.900334 kernel: audit: type=2000 audit(1757698383.705:1): state=initialized audit_enabled=0 res=1
Sep 12 17:33:04.900344 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:33:04.900354 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:33:04.900363 kernel: cpuidle: using governor menu
Sep 12 17:33:04.900370 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:33:04.900390 kernel: dca service started, version 1.12.1
Sep 12 17:33:04.900409 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 12 17:33:04.900416 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:33:04.900424 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:33:04.900431 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:33:04.900438 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:33:04.900445 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:33:04.900452 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:33:04.900460 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:33:04.900467 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:33:04.900476 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:33:04.900483 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:33:04.900490 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:33:04.900497 kernel: ACPI: Interpreter enabled
Sep 12 17:33:04.900504 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:33:04.900511 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:33:04.900518 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:33:04.900525 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:33:04.900532 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 17:33:04.900541 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:33:04.900683 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:33:04.900763 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 17:33:04.900838 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 17:33:04.900848 kernel: PCI host bridge to bus 0000:00
Sep 12 17:33:04.900924 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:33:04.900992 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:33:04.901072 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:33:04.903218 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Sep 12 17:33:04.903318 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 17:33:04.903408 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 17:33:04.903498 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:33:04.903626 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 17:33:04.903756 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Sep 12 17:33:04.903859 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Sep 12 17:33:04.903967 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Sep 12 17:33:04.904069 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Sep 12 17:33:04.905349 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Sep 12 17:33:04.905439 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:33:04.905546 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 12 17:33:04.905647 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Sep 12 17:33:04.905744 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 12 17:33:04.905838 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Sep 12 17:33:04.905922 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 12 17:33:04.905999 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Sep 12 17:33:04.906079 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 12 17:33:04.906288 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Sep 12 17:33:04.906400 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 12 17:33:04.906488 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Sep 12 17:33:04.906572 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 12 17:33:04.906668 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Sep 12 17:33:04.906765 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 12 17:33:04.906847 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Sep 12 17:33:04.906941 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 12 17:33:04.907043 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Sep 12 17:33:04.907201 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 12 17:33:04.907298 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Sep 12 17:33:04.907387 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 17:33:04.907481 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 17:33:04.907597 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 17:33:04.907695 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Sep 12 17:33:04.907786 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Sep 12 17:33:04.907867 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 17:33:04.907942 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 12 17:33:04.908046 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 12 17:33:04.909257 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Sep 12 17:33:04.909356 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 12 17:33:04.909435 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Sep 12 17:33:04.909509 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 12 17:33:04.909599 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 12 17:33:04.909679 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 12 17:33:04.909777 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 12 17:33:04.909876 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Sep 12 17:33:04.909969 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 12 17:33:04.910064 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 12 17:33:04.910168 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 12 17:33:04.910254 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 12 17:33:04.910331 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Sep 12 17:33:04.910426 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Sep 12 17:33:04.910522 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 12 17:33:04.910623 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 12 17:33:04.910713 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 12 17:33:04.910816 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 12 17:33:04.910913 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 12 17:33:04.911013 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 12 17:33:04.911107 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 12 17:33:04.912228 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 12 17:33:04.912324 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 12 17:33:04.912420 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Sep 12 17:33:04.912523 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 12 17:33:04.912615 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 12 17:33:04.912690 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 12 17:33:04.912797 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 12 17:33:04.912884 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Sep 12 17:33:04.912972 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Sep 12 17:33:04.913056 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 12 17:33:04.913153 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 12 17:33:04.913238 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 12 17:33:04.913248 kernel: acpiphp: Slot [0] registered
Sep 12 17:33:04.913360 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 12 17:33:04.913459 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Sep 12 17:33:04.913540 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Sep 12 17:33:04.913629 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Sep 12 17:33:04.913724 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 12 17:33:04.913814 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 12 17:33:04.913895 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 12 17:33:04.913906 kernel: acpiphp: Slot [0-2] registered
Sep 12 17:33:04.913980 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 12 17:33:04.914059 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 12 17:33:04.915190 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 12 17:33:04.915204 kernel: acpiphp: Slot [0-3] registered
Sep 12 17:33:04.915297 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 12 17:33:04.915374 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 12 17:33:04.915447 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 12 17:33:04.915459 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:33:04.915466 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:33:04.915477 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:33:04.915485 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:33:04.915492 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 17:33:04.915499 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 17:33:04.915507 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 17:33:04.915514 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 17:33:04.915522 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 17:33:04.915529 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 17:33:04.915536 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 17:33:04.915545 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 17:33:04.915553 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 17:33:04.915560 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 17:33:04.915567 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 17:33:04.915586 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 17:33:04.915594 kernel: iommu: Default domain type: Translated
Sep 12 17:33:04.915601 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:33:04.915609 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:33:04.915616 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:33:04.915625 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 17:33:04.915632 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Sep 12 17:33:04.915709 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 17:33:04.915783 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 17:33:04.915855 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:33:04.915865 kernel: vgaarb: loaded
Sep 12 17:33:04.915873 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:33:04.915880 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:33:04.915887 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:33:04.915898 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:33:04.915906 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:33:04.915913 kernel: pnp: PnP ACPI init
Sep 12 17:33:04.915992 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 17:33:04.916004 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 17:33:04.916011 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:33:04.916019 kernel: NET: Registered PF_INET protocol family
Sep 12 17:33:04.916026 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:33:04.916036 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:33:04.916044 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:33:04.916052 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:33:04.916060 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:33:04.916068 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:33:04.916075 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:33:04.916083 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:33:04.916090 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:33:04.916098 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:33:04.916188 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 12 17:33:04.916263 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 12 17:33:04.916337 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 12 17:33:04.916411 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Sep 12 17:33:04.916484 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Sep 12 17:33:04.916556 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Sep 12 17:33:04.916643 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 12 17:33:04.916723 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 12 17:33:04.916796 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 12 17:33:04.916870 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 12 17:33:04.916947 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 12 17:33:04.917028 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 12 17:33:04.917104 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 12 17:33:04.917756 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 12 17:33:04.917861 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 12 17:33:04.917942 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 12 17:33:04.918016 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 12 17:33:04.918098 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 12 17:33:04.918190 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 12 17:33:04.918265 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 12 17:33:04.918337 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 12 17:33:04.918410 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 12 17:33:04.918489 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 12 17:33:04.918587 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 12 17:33:04.918663 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 12 17:33:04.918737 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Sep 12 17:33:04.918811 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 12 17:33:04.918884 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 12 17:33:04.918958 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 12 17:33:04.919031 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Sep 12 17:33:04.919106 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 12 17:33:04.919198 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 12 17:33:04.919275 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 12 17:33:04.919355 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Sep 12 17:33:04.919432 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 12 17:33:04.919506 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 12 17:33:04.919597 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:33:04.919666 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:33:04.919733 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:33:04.919798 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Sep 12 17:33:04.919864 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 17:33:04.919929 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 17:33:04.920013 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 12 17:33:04.920084 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 12 17:33:04.920237 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 12 17:33:04.920308 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 12 17:33:04.920383 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 12 17:33:04.920451 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 12 17:33:04.920532 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 12 17:33:04.920613 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 12 17:33:04.920687 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 12 17:33:04.920756 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 12 17:33:04.920834 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 12 17:33:04.920903 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 12 17:33:04.920982 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Sep 12 17:33:04.921052 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 12 17:33:04.921159 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 12 17:33:04.921239 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Sep 12 17:33:04.921311 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Sep 12 17:33:04.921380 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 12 17:33:04.921453 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Sep 12 17:33:04.921527 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 12 17:33:04.921608 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 12 17:33:04.921620 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 17:33:04.921628 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:33:04.921636 kernel: Initialise system trusted keyrings
Sep 12 17:33:04.921644 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:33:04.921652 kernel: Key type asymmetric registered
Sep 12 17:33:04.921659 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:33:04.921667 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:33:04.921678 kernel: io scheduler mq-deadline registered
Sep 12 17:33:04.921686 kernel: io scheduler kyber registered
Sep 12 17:33:04.921693 kernel: io scheduler bfq registered
Sep 12 17:33:04.921772 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Sep 12 17:33:04.921847 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Sep 12 17:33:04.921922 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Sep 12 17:33:04.921999 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Sep 12 17:33:04.922075 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Sep 12 17:33:04.922184 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Sep 12 17:33:04.922302 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Sep 12 17:33:04.922414 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Sep 12 17:33:04.922520 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Sep 12 17:33:04.922624 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Sep 12 17:33:04.922703 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Sep 12 17:33:04.922781 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Sep 12 17:33:04.922883 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Sep 12 17:33:04.922976 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Sep 12 17:33:04.923072 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Sep 12 17:33:04.925210 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Sep 12 17:33:04.925233 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 17:33:04.925331 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Sep 12 17:33:04.925427 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Sep 12 17:33:04.925439 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:33:04.925448 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Sep 12 17:33:04.925455 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:33:04.925466 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:33:04.925475 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:33:04.925484 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:33:04.925495 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:33:04.925607 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 12 17:33:04.925680 kernel: rtc_cmos 00:03: registered as rtc0
Sep 12 17:33:04.925692 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 12 17:33:04.925758 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T17:33:04 UTC (1757698384)
Sep 12 17:33:04.925829 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 17:33:04.925839 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 17:33:04.925850 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:33:04.925858 kernel: Segment Routing with IPv6
Sep 12 17:33:04.925866 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:33:04.925873 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:33:04.925881 kernel: Key type dns_resolver registered
Sep 12 17:33:04.925889 kernel: IPI shorthand broadcast: enabled
Sep 12 17:33:04.925898 kernel: sched_clock: Marking stable (1171006949, 149350823)->(1330124870, -9767098)
Sep 12 17:33:04.925907 kernel: registered taskstats version 1
Sep 12 17:33:04.925915 kernel: Loading compiled-in X.509 certificates
Sep 12 17:33:04.925923 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:33:04.925930 kernel: Key type .fscrypt registered
Sep 12 17:33:04.925941 kernel: Key type fscrypt-provisioning registered
Sep 12 17:33:04.925952 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:33:04.925961 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:33:04.925969 kernel: ima: No architecture policies found
Sep 12 17:33:04.925978 kernel: clk: Disabling unused clocks
Sep 12 17:33:04.925986 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:33:04.925994 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:33:04.926001 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:33:04.926009 kernel: Run /init as init process
Sep 12 17:33:04.926017 kernel: with arguments:
Sep 12 17:33:04.926026 kernel: /init
Sep 12 17:33:04.926033 kernel: with environment:
Sep 12 17:33:04.926041 kernel: HOME=/
Sep 12 17:33:04.926048 kernel: TERM=linux
Sep 12 17:33:04.926058 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:33:04.926068 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:33:04.926079 systemd[1]: Detected virtualization kvm.
Sep 12 17:33:04.926088 systemd[1]: Detected architecture x86-64.
Sep 12 17:33:04.926096 systemd[1]: Running in initrd.
Sep 12 17:33:04.926104 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:33:04.926112 systemd[1]: Hostname set to .
Sep 12 17:33:04.926209 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:33:04.926218 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:33:04.926229 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:33:04.926241 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:33:04.926253 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:33:04.926263 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:33:04.926272 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:33:04.926280 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:33:04.926293 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:33:04.926301 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:33:04.926310 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:33:04.926318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:33:04.926326 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:33:04.926334 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:33:04.926342 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:33:04.926352 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:33:04.926360 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:33:04.926368 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:33:04.926377 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:33:04.926385 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:33:04.926393 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:33:04.926401 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:33:04.926409 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:33:04.926419 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:33:04.926428 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:33:04.926436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:33:04.926444 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:33:04.926453 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:33:04.926461 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:33:04.926469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:33:04.926478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:33:04.926486 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:33:04.926517 systemd-journald[187]: Collecting audit messages is disabled.
Sep 12 17:33:04.926538 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:33:04.926546 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:33:04.926557 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:33:04.926566 systemd-journald[187]: Journal started
Sep 12 17:33:04.926595 systemd-journald[187]: Runtime Journal (/run/log/journal/232e8ed4127440329a5b5b2bb142dc04) is 4.8M, max 38.4M, 33.6M free.
Sep 12 17:33:04.900439 systemd-modules-load[188]: Inserted module 'overlay'
Sep 12 17:33:04.970249 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:33:04.970278 kernel: Bridge firewalling registered
Sep 12 17:33:04.970292 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:33:04.937817 systemd-modules-load[188]: Inserted module 'br_netfilter'
Sep 12 17:33:04.970901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:33:04.971791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:33:04.972910 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:33:04.979255 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:33:04.980886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:33:04.983244 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:33:04.986318 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:33:04.993052 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:33:05.000777 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:33:05.003249 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:33:05.008300 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:33:05.017419 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:33:05.020895 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:33:05.031425 dracut-cmdline[218]: dracut-dracut-053
Sep 12 17:33:05.034716 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:33:05.053763 systemd-resolved[222]: Positive Trust Anchors:
Sep 12 17:33:05.054261 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:33:05.054292 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:33:05.057677 systemd-resolved[222]: Defaulting to hostname 'linux'.
Sep 12 17:33:05.058551 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:33:05.064955 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:33:05.102164 kernel: SCSI subsystem initialized
Sep 12 17:33:05.112155 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:33:05.123155 kernel: iscsi: registered transport (tcp)
Sep 12 17:33:05.143314 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:33:05.143403 kernel: QLogic iSCSI HBA Driver
Sep 12 17:33:05.180730 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:33:05.185308 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:33:05.211751 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:33:05.211840 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:33:05.211851 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:33:05.253156 kernel: raid6: avx2x4 gen() 27339 MB/s
Sep 12 17:33:05.271162 kernel: raid6: avx2x2 gen() 28457 MB/s
Sep 12 17:33:05.288416 kernel: raid6: avx2x1 gen() 24166 MB/s
Sep 12 17:33:05.288496 kernel: raid6: using algorithm avx2x2 gen() 28457 MB/s
Sep 12 17:33:05.308241 kernel: raid6: .... xor() 19740 MB/s, rmw enabled
Sep 12 17:33:05.308327 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:33:05.329159 kernel: xor: automatically using best checksumming function avx
Sep 12 17:33:05.481159 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:33:05.491150 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:33:05.497260 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:33:05.510153 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Sep 12 17:33:05.514231 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:33:05.522415 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:33:05.537065 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Sep 12 17:33:05.565075 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:33:05.570390 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:33:05.623432 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:33:05.630384 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:33:05.644966 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:33:05.648719 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:33:05.650813 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:33:05.651668 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:33:05.658468 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:33:05.676522 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:33:05.697149 kernel: scsi host0: Virtio SCSI HBA
Sep 12 17:33:05.702315 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Sep 12 17:33:05.759136 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:33:05.773734 kernel: ACPI: bus type USB registered
Sep 12 17:33:05.781164 kernel: usbcore: registered new interface driver usbfs
Sep 12 17:33:05.781227 kernel: libata version 3.00 loaded.
Sep 12 17:33:05.782992 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:33:05.788706 kernel: usbcore: registered new interface driver hub
Sep 12 17:33:05.788732 kernel: usbcore: registered new device driver usb
Sep 12 17:33:05.783153 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:33:05.784289 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:33:05.790344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:33:05.790462 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:33:05.791199 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:33:05.801288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:33:05.807262 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 17:33:05.807435 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 17:33:05.810605 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 12 17:33:05.810809 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 17:33:05.817162 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:33:05.817219 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:33:05.820436 kernel: scsi host1: ahci
Sep 12 17:33:05.822137 kernel: scsi host2: ahci
Sep 12 17:33:05.824194 kernel: scsi host3: ahci
Sep 12 17:33:05.829525 kernel: scsi host4: ahci
Sep 12 17:33:05.833136 kernel: scsi host5: ahci
Sep 12 17:33:05.836218 kernel: scsi host6: ahci
Sep 12 17:33:05.836385 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46
Sep 12 17:33:05.836403 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46
Sep 12 17:33:05.836421 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46
Sep 12 17:33:05.836435 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46
Sep 12 17:33:05.836447 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46
Sep 12 17:33:05.836459 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46
Sep 12 17:33:05.890313 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:33:05.894237 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:33:05.909014 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:33:06.147300 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 17:33:06.147382 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 17:33:06.147408 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 17:33:06.147417 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Sep 12 17:33:06.149850 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 17:33:06.150129 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 17:33:06.151161 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 17:33:06.153236 kernel: ata1.00: applying bridge limits
Sep 12 17:33:06.154240 kernel: ata1.00: configured for UDMA/100
Sep 12 17:33:06.155150 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 17:33:06.186868 kernel: sd 0:0:0:0: Power-on or device reset occurred
Sep 12 17:33:06.187108 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Sep 12 17:33:06.190014 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 12 17:33:06.191436 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 12 17:33:06.191629 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Sep 12 17:33:06.191760 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Sep 12 17:33:06.194384 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 12 17:33:06.194579 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 12 17:33:06.197146 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 12 17:33:06.201329 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:33:06.201357 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Sep 12 17:33:06.201498 kernel: GPT:17805311 != 80003071
Sep 12 17:33:06.201511 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Sep 12 17:33:06.201648 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:33:06.203132 kernel: hub 1-0:1.0: USB hub found
Sep 12 17:33:06.203296 kernel: GPT:17805311 != 80003071
Sep 12 17:33:06.203750 kernel: hub 1-0:1.0: 4 ports detected
Sep 12 17:33:06.203891 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:33:06.207156 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 12 17:33:06.207347 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:33:06.207362 kernel: hub 2-0:1.0: USB hub found
Sep 12 17:33:06.211330 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 12 17:33:06.211497 kernel: hub 2-0:1.0: 4 ports detected
Sep 12 17:33:06.233669 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 17:33:06.233867 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:33:06.250184 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep 12 17:33:06.260137 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (462)
Sep 12 17:33:06.264144 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (454)
Sep 12 17:33:06.271095 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Sep 12 17:33:06.278426 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Sep 12 17:33:06.284507 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 12 17:33:06.291615 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Sep 12 17:33:06.293070 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Sep 12 17:33:06.298327 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:33:06.307604 disk-uuid[575]: Primary Header is updated.
Sep 12 17:33:06.307604 disk-uuid[575]: Secondary Entries is updated.
Sep 12 17:33:06.307604 disk-uuid[575]: Secondary Header is updated.
Sep 12 17:33:06.311642 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:33:06.446168 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 12 17:33:06.581153 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:33:06.587262 kernel: usbcore: registered new interface driver usbhid
Sep 12 17:33:06.587318 kernel: usbhid: USB HID core driver
Sep 12 17:33:06.592455 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Sep 12 17:33:06.592484 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Sep 12 17:33:07.320222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:33:07.320929 disk-uuid[577]: The operation has completed successfully.
Sep 12 17:33:07.365224 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:33:07.365326 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:33:07.386552 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:33:07.389349 sh[597]: Success
Sep 12 17:33:07.404242 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 12 17:33:07.441635 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:33:07.443785 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:33:07.444441 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:33:07.460733 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:33:07.460790 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:33:07.462741 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:33:07.465988 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:33:07.466009 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:33:07.475148 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:33:07.477059 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:33:07.478041 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:33:07.484264 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:33:07.486463 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:33:07.500673 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:33:07.500733 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:33:07.500743 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:33:07.505621 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:33:07.505671 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:33:07.512490 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:33:07.514832 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:33:07.519409 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:33:07.527387 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:33:07.578891 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:33:07.596364 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:33:07.619279 ignition[709]: Ignition 2.19.0 Sep 12 17:33:07.619288 ignition[709]: Stage: fetch-offline Sep 12 17:33:07.621057 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:33:07.619324 ignition[709]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:07.619332 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:33:07.619431 ignition[709]: parsed url from cmdline: "" Sep 12 17:33:07.619434 ignition[709]: no config URL provided Sep 12 17:33:07.619438 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:33:07.619444 ignition[709]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:33:07.619449 ignition[709]: failed to fetch config: resource requires networking Sep 12 17:33:07.619938 ignition[709]: Ignition finished successfully Sep 12 17:33:07.627827 systemd-networkd[778]: lo: Link UP Sep 12 17:33:07.627835 systemd-networkd[778]: lo: Gained carrier Sep 12 17:33:07.629615 systemd-networkd[778]: Enumeration completed Sep 12 17:33:07.629679 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:33:07.630548 systemd[1]: Reached target network.target - Network. Sep 12 17:33:07.630635 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
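The fetch-offline lines above show Ignition's local search order before it gives up and defers to the networked fetch stage: drop-ins under /usr/lib/ignition/base.d, a platform directory for hetzner, the baked-in user.ign, then a config URL parsed from the kernel command line (empty here). A hypothetical sketch of that decision, with the paths taken from the log and the function name invented:

```python
import os

SEARCH = [
    "/usr/lib/ignition/base.d",                  # baked-in base configs
    "/usr/lib/ignition/base.platform.d/hetzner", # platform-specific drop-ins
    "/usr/lib/ignition/user.ign",                # user config on the image
]

def find_offline_config():
    """Mimic ignition's fetch-offline stage: use a local config if one
    exists, otherwise signal that fetching requires networking."""
    for path in SEARCH:
        if os.path.isdir(path) and os.listdir(path):
            return path
        if os.path.isfile(path):
            return path
    raise RuntimeError("failed to fetch config: resource requires networking")
```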
Sep 12 17:33:07.630638 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:33:07.631475 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:33:07.631478 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:33:07.632026 systemd-networkd[778]: eth0: Link UP Sep 12 17:33:07.632030 systemd-networkd[778]: eth0: Gained carrier Sep 12 17:33:07.632036 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:33:07.637378 systemd-networkd[778]: eth1: Link UP Sep 12 17:33:07.637382 systemd-networkd[778]: eth1: Gained carrier Sep 12 17:33:07.637391 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:33:07.640270 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 17:33:07.652889 ignition[786]: Ignition 2.19.0 Sep 12 17:33:07.652899 ignition[786]: Stage: fetch Sep 12 17:33:07.653108 ignition[786]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:07.653130 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:33:07.653210 ignition[786]: parsed url from cmdline: "" Sep 12 17:33:07.653212 ignition[786]: no config URL provided Sep 12 17:33:07.653217 ignition[786]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:33:07.653222 ignition[786]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:33:07.653239 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 12 17:33:07.653370 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 12 17:33:07.660189 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 12 17:33:07.699224 systemd-networkd[778]: eth0: DHCPv4 address 135.181.98.85/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 12 17:33:07.853569 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 12 17:33:07.859209 ignition[786]: GET result: OK Sep 12 17:33:07.859294 ignition[786]: parsing config with SHA512: b6efcc8f549edb39995264a1661b99f55f8296b47b9442e43c20c1d93915c379b3081e5c303df9adb422ea66d75045aa3ae883ae212d940e1cfdfa025244e26c Sep 12 17:33:07.862738 unknown[786]: fetched base config from "system" Sep 12 17:33:07.863057 ignition[786]: fetch: fetch complete Sep 12 17:33:07.862746 unknown[786]: fetched base config from "system" Sep 12 17:33:07.863061 ignition[786]: fetch: fetch passed Sep 12 17:33:07.862750 unknown[786]: fetched user config from "hetzner" Sep 12 17:33:07.863092 ignition[786]: Ignition finished successfully Sep 12 17:33:07.865630 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:33:07.875334 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
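The fetch stage's attempt #1 fails with "network is unreachable" because the DHCP leases for eth0/eth1 arrive only afterwards; attempt #2 succeeds and the payload is identified by its SHA512, exactly as logged. A stdlib-only retry loop in the same spirit; the URL is the one in the log, while the retry count and backoff policy are assumptions:

```python
import hashlib
import time
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(retries=5, delay=1.0):
    """GET the instance userdata, retrying while the network comes up, and
    report the SHA512 the way ignition's 'parsing config with SHA512' does."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=5) as resp:
                body = resp.read()
            print("GET result: OK")
            print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())
            return body
        except OSError as err:            # covers 'network is unreachable'
            print(f"GET error on attempt #{attempt}: {err}")
            time.sleep(delay * attempt)   # linear backoff (assumption)
    raise RuntimeError("userdata fetch failed")
```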
Sep 12 17:33:07.887256 ignition[794]: Ignition 2.19.0 Sep 12 17:33:07.887265 ignition[794]: Stage: kargs Sep 12 17:33:07.887458 ignition[794]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:07.887467 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:33:07.888477 ignition[794]: kargs: kargs passed Sep 12 17:33:07.888536 ignition[794]: Ignition finished successfully Sep 12 17:33:07.890468 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:33:07.896244 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:33:07.907181 ignition[801]: Ignition 2.19.0 Sep 12 17:33:07.907192 ignition[801]: Stage: disks Sep 12 17:33:07.907340 ignition[801]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:07.909313 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:33:07.907355 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:33:07.914324 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:33:07.908169 ignition[801]: disks: disks passed Sep 12 17:33:07.915352 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:33:07.908209 ignition[801]: Ignition finished successfully Sep 12 17:33:07.916646 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:33:07.917815 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:33:07.918759 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:33:07.926392 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:33:07.938832 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 12 17:33:07.941497 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:33:07.947247 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:33:08.021366 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:33:08.021900 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:33:08.022788 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:33:08.028227 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:33:08.030099 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:33:08.032257 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:33:08.035336 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:33:08.036760 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:33:08.049693 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (818) Sep 12 17:33:08.049724 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:33:08.049737 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:33:08.049758 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:33:08.049771 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:33:08.049783 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:33:08.041509 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
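The systemd-fsck summary above reads as inodes used/total and blocks used/total: ROOT holds 14 of 1628000 inodes and 120691 of 1617920 blocks, about 7.5% full. A small illustrative parser for that e2fsck summary format:

```python
import re

LINE = "ROOT: clean, 14/1628000 files, 120691/1617920 blocks"

def parse_fsck_summary(line: str) -> dict:
    """Pull label, state, inode usage and block usage out of an e2fsck
    summary line like the one systemd-fsck logged above."""
    m = re.match(
        r"(?P<label>\S+): (?P<state>\w+), "
        r"(?P<inodes_used>\d+)/(?P<inodes_total>\d+) files, "
        r"(?P<blocks_used>\d+)/(?P<blocks_total>\d+) blocks",
        line,
    )
    if not m:
        raise ValueError("unrecognised fsck summary")
    d = {k: int(v) if v.isdigit() else v for k, v in m.groupdict().items()}
    d["block_usage_pct"] = 100 * d["blocks_used"] / d["blocks_total"]
    return d

print(parse_fsck_summary(LINE))  # ~7.5% of blocks in use
```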
Sep 12 17:33:08.051411 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:33:08.055270 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:33:08.099974 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:33:08.107379 coreos-metadata[820]: Sep 12 17:33:08.106 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 12 17:33:08.107379 coreos-metadata[820]: Sep 12 17:33:08.107 INFO Fetch successful Sep 12 17:33:08.107379 coreos-metadata[820]: Sep 12 17:33:08.107 INFO wrote hostname ci-4081-3-6-2-340685d2b8 to /sysroot/etc/hostname Sep 12 17:33:08.110300 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:33:08.113208 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:33:08.113879 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:33:08.116034 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:33:08.181289 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:33:08.190287 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:33:08.194979 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:33:08.201150 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:33:08.222170 ignition[934]: INFO : Ignition 2.19.0 Sep 12 17:33:08.222170 ignition[934]: INFO : Stage: mount Sep 12 17:33:08.225619 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:08.225619 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:33:08.225619 ignition[934]: INFO : mount: mount passed Sep 12 17:33:08.225619 ignition[934]: INFO : Ignition finished successfully Sep 12 17:33:08.226223 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:33:08.227737 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:33:08.235269 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:33:08.458565 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:33:08.464318 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:33:08.473142 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946) Sep 12 17:33:08.477032 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:33:08.477062 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:33:08.477072 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:33:08.482245 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:33:08.482273 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:33:08.485549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
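flatcar-metadata-hostname above fetches the hostname from the Hetzner metadata service and writes it into the future root filesystem, which is where the ci-4081-3-6-2-340685d2b8 name used later by systemd-resolved comes from. A minimal sketch of the same two steps, using the endpoint from the log:

```python
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(root="/sysroot"):
    """Fetch the instance hostname from the metadata service and persist it,
    as flatcar-metadata-hostname.service does during the initrd."""
    with urllib.request.urlopen(HOSTNAME_URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()
    with open(f"{root}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to {root}/etc/hostname")
```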
Sep 12 17:33:08.509866 ignition[962]: INFO : Ignition 2.19.0 Sep 12 17:33:08.509866 ignition[962]: INFO : Stage: files Sep 12 17:33:08.511416 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:08.511416 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:33:08.513102 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:33:08.513102 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:33:08.513102 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:33:08.515679 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:33:08.516820 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:33:08.516820 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:33:08.516006 unknown[962]: wrote ssh authorized keys file for user: core Sep 12 17:33:08.519227 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:33:08.519227 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 17:33:08.701731 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:33:09.361490 systemd-networkd[778]: eth1: Gained IPv6LL Sep 12 17:33:09.425388 systemd-networkd[778]: eth0: Gained IPv6LL Sep 12 17:33:09.695736 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:33:09.695736 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:33:09.697954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 12 17:33:10.089972 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:33:10.435551 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:33:10.435551 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:33:10.437781 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:33:10.437781 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:33:10.437781 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:33:10.437781 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 12 17:33:10.437781 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 12 17:33:10.437781 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 12 17:33:10.437781 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 12 17:33:10.437781 ignition[962]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:33:10.437781 ignition[962]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:33:10.437781 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:33:10.437781 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:33:10.437781 ignition[962]: INFO : files: files passed Sep 12 17:33:10.437781 ignition[962]: INFO : Ignition finished successfully Sep 12 17:33:10.439831 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:33:10.450330 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:33:10.453327 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:33:10.468400 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:33:10.468544 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 12 17:33:10.477209 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:33:10.477209 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:33:10.479790 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:33:10.479239 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:33:10.480869 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:33:10.488378 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:33:10.508977 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:33:10.509091 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:33:10.510642 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:33:10.511530 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:33:10.512732 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:33:10.515269 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:33:10.536034 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:33:10.541454 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:33:10.551555 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:33:10.552533 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:33:10.553735 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:33:10.554852 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:33:10.555041 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:33:10.556443 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:33:10.557218 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:33:10.558489 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:33:10.559615 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:33:10.560735 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:33:10.561929 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:33:10.563136 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:33:10.564344 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:33:10.565510 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:33:10.566622 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:33:10.567588 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:33:10.567738 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:33:10.568848 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:33:10.569583 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:33:10.570574 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:33:10.572431 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 12 17:33:10.573049 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:33:10.573220 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:33:10.574531 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:33:10.574664 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:33:10.575879 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:33:10.576043 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:33:10.577092 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:33:10.577240 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:33:10.585723 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:33:10.586831 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:33:10.587035 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:33:10.591501 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:33:10.595763 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:33:10.595990 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:33:10.598944 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:33:10.599229 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:33:10.612925 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:33:10.613094 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:33:10.617721 ignition[1015]: INFO : Ignition 2.19.0 Sep 12 17:33:10.619691 ignition[1015]: INFO : Stage: umount Sep 12 17:33:10.620484 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:10.620484 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:33:10.623288 ignition[1015]: INFO : umount: umount passed Sep 12 17:33:10.623288 ignition[1015]: INFO : Ignition finished successfully Sep 12 17:33:10.625193 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:33:10.629202 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:33:10.629334 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:33:10.633059 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:33:10.633113 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:33:10.634115 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:33:10.634178 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:33:10.636618 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:33:10.636676 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:33:10.638197 systemd[1]: Stopped target network.target - Network. Sep 12 17:33:10.639283 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:33:10.639341 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:33:10.642450 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:33:10.642982 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:33:10.645392 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 12 17:33:10.646213 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:33:10.649459 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:33:10.651017 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:33:10.651072 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:33:10.654164 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:33:10.654202 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:33:10.658095 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:33:10.658159 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:33:10.658668 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:33:10.658703 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:33:10.661480 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:33:10.662616 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:33:10.663956 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:33:10.664061 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:33:10.665278 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:33:10.665361 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:33:10.667185 systemd-networkd[778]: eth0: DHCPv6 lease lost Sep 12 17:33:10.672208 systemd-networkd[778]: eth1: DHCPv6 lease lost Sep 12 17:33:10.674885 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:33:10.675034 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:33:10.676638 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:33:10.676749 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:33:10.679593 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:33:10.679635 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:33:10.685347 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:33:10.686482 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:33:10.686569 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:33:10.687271 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:33:10.687321 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:33:10.688303 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:33:10.688340 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:33:10.689533 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:33:10.689583 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:33:10.690979 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:33:10.701975 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:33:10.702101 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:33:10.706753 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:33:10.706885 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:33:10.708295 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Sep 12 17:33:10.708330 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:33:10.709355 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:33:10.709385 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:33:10.710580 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:33:10.710620 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:33:10.712480 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:33:10.712527 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:33:10.713789 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:33:10.713831 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:33:10.719309 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:33:10.720045 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:33:10.720090 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:33:10.720642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:33:10.720676 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:10.726374 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:33:10.726473 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:33:10.727932 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:33:10.736257 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:33:10.741462 systemd[1]: Switching root. Sep 12 17:33:10.768957 systemd-journald[187]: Journal stopped Sep 12 17:33:11.621370 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Sep 12 17:33:11.621431 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:33:11.621448 kernel: SELinux: policy capability open_perms=1 Sep 12 17:33:11.621458 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:33:11.621467 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:33:11.621476 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:33:11.621486 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:33:11.621498 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:33:11.621507 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:33:11.621516 kernel: audit: type=1403 audit(1757698390.886:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:33:11.621526 systemd[1]: Successfully loaded SELinux policy in 42.945ms. Sep 12 17:33:11.621541 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.297ms. Sep 12 17:33:11.621552 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:33:11.621564 systemd[1]: Detected virtualization kvm. Sep 12 17:33:11.621574 systemd[1]: Detected architecture x86-64. Sep 12 17:33:11.621585 systemd[1]: Detected first boot. Sep 12 17:33:11.621595 systemd[1]: Hostname set to . Sep 12 17:33:11.621609 systemd[1]: Initializing machine ID from VM UUID. 
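After the root switch, systemd detects a first boot and derives the machine ID from the VM UUID that the hypervisor exposes over SMBIOS/DMI. A simplified sketch of that derivation, assuming the sysfs path is readable (systemd reads the same UUID through its own DMI handling rather than this exact file access):

```python
import pathlib
import uuid

def machine_id_from_vm_uuid():
    """Derive a machine ID from the hypervisor-provided DMI product UUID,
    in the spirit of 'Initializing machine ID from VM UUID' above.
    Reading product_uuid from sysfs typically requires root."""
    raw = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    return uuid.UUID(raw).hex  # 32 lowercase hex chars, /etc/machine-id format

print(machine_id_from_vm_uuid())
```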
Sep 12 17:33:11.621619 zram_generator::config[1058]: No configuration found. Sep 12 17:33:11.621630 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:33:11.621640 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:33:11.621649 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:33:11.621659 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:33:11.621671 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:33:11.621681 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:33:11.621691 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:33:11.621701 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:33:11.621712 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:33:11.621722 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:33:11.621732 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:33:11.621742 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:33:11.621752 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:33:11.621764 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:33:11.621773 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:33:11.621784 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:33:11.621794 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:33:11.621804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:33:11.621814 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:33:11.621824 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:33:11.621837 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:33:11.621849 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:33:11.621859 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:33:11.621869 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:33:11.621879 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:33:11.621889 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:33:11.621899 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:33:11.621909 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:33:11.621920 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:33:11.621930 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:33:11.621941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:33:11.621951 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:33:11.621961 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:33:11.621970 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
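Unit names like system-addon\x2dconfig.slice and dev-disk-by\x2dlabel-OEM.device above show systemd's escaping for unit names derived from paths: '/' becomes '-', and bytes outside [a-zA-Z0-9:_.] (including a literal '-') become \xXX. A small sketch of that path-escaping rule; it covers the cases in this log, not every corner of systemd-escape:

```python
def systemd_escape_path(path: str) -> str:
    """Escape a filesystem path into a systemd unit-name stem the way the
    'dev-disk-by\\x2dlabel-OEM.device' names in the journal were produced."""
    out = []
    for i, byte in enumerate(path.lstrip("/").encode()):
        c = chr(byte)
        if (c.isascii() and c.isalnum()) or c in ":_" or (c == "." and i > 0):
            out.append(c)            # safe byte, kept as-is
        elif c == "/":
            out.append("-")          # path separator encodes as '-'
        else:
            out.append(f"\\x{byte:02x}")  # everything else, incl. '-', escapes
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")
# -> dev-disk-by\x2dlabel-OEM.device
```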
Sep 12 17:33:11.621980 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:33:11.621991 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:33:11.622001 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:33:11.622013 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:11.622023 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:33:11.622033 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:33:11.622043 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:33:11.622054 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:33:11.622064 systemd[1]: Reached target machines.target - Containers. Sep 12 17:33:11.622074 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:33:11.622089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:33:11.622100 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:33:11.622109 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:33:11.623196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:33:11.623212 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:33:11.623222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:33:11.623233 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:33:11.623246 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:33:11.623257 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:33:11.623267 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:33:11.623277 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:33:11.623286 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:33:11.623296 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:33:11.623306 kernel: loop: module loaded Sep 12 17:33:11.623316 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:33:11.623326 kernel: fuse: init (API version 7.39) Sep 12 17:33:11.623337 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:33:11.623347 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:33:11.623356 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:33:11.623392 systemd-journald[1148]: Collecting audit messages is disabled. Sep 12 17:33:11.623411 systemd-journald[1148]: Journal started Sep 12 17:33:11.623432 systemd-journald[1148]: Runtime Journal (/run/log/journal/232e8ed4127440329a5b5b2bb142dc04) is 4.8M, max 38.4M, 33.6M free. Sep 12 17:33:11.351101 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:33:11.370433 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Sep 12 17:33:11.370895 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:33:11.631545 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:33:11.631600 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:33:11.631619 systemd[1]: Stopped verity-setup.service. Sep 12 17:33:11.638268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:11.653747 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:33:11.644385 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:33:11.645012 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:33:11.648537 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:33:11.649041 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:33:11.649571 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:33:11.650097 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:33:11.650720 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:33:11.655046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:33:11.655950 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:33:11.656091 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:33:11.656811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:33:11.656911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:33:11.657576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:33:11.657678 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:33:11.658751 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:33:11.658852 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:33:11.659563 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:33:11.659716 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:33:11.660734 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:33:11.661477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:33:11.662136 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:33:11.664142 kernel: ACPI: bus type drm_connector registered Sep 12 17:33:11.665698 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:33:11.665809 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:33:11.671465 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:33:11.677835 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:33:11.682194 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:33:11.683299 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:33:11.683330 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:33:11.685440 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Sep 12 17:33:11.689229 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:33:11.696268 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:33:11.697335 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:33:11.700232 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:33:11.707643 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:33:11.708444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:33:11.715302 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:33:11.715910 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:33:11.722521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:33:11.727324 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:33:11.734641 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:33:11.738593 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:33:11.739199 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:33:11.740349 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:33:11.752243 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:33:11.753319 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:33:11.763722 systemd-journald[1148]: Time spent on flushing to /var/log/journal/232e8ed4127440329a5b5b2bb142dc04 is 27.151ms for 1130 entries. Sep 12 17:33:11.763722 systemd-journald[1148]: System Journal (/var/log/journal/232e8ed4127440329a5b5b2bb142dc04) is 8.0M, max 584.8M, 576.8M free. Sep 12 17:33:11.812303 systemd-journald[1148]: Received client request to flush runtime journal. Sep 12 17:33:11.812336 kernel: loop0: detected capacity change from 0 to 142488 Sep 12 17:33:11.765511 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:33:11.789167 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:33:11.800320 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:33:11.810440 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:33:11.816759 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:33:11.830232 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:33:11.832687 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:33:11.841364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:33:11.846398 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:33:11.843879 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:33:11.845880 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Sep 12 17:33:11.877162 kernel: loop1: detected capacity change from 0 to 229808 Sep 12 17:33:11.900789 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Sep 12 17:33:11.900824 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Sep 12 17:33:11.912601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:33:11.924507 kernel: loop2: detected capacity change from 0 to 8 Sep 12 17:33:11.954176 kernel: loop3: detected capacity change from 0 to 140768 Sep 12 17:33:11.995185 kernel: loop4: detected capacity change from 0 to 142488 Sep 12 17:33:12.014219 kernel: loop5: detected capacity change from 0 to 229808 Sep 12 17:33:12.034173 kernel: loop6: detected capacity change from 0 to 8 Sep 12 17:33:12.037153 kernel: loop7: detected capacity change from 0 to 140768 Sep 12 17:33:12.059378 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 12 17:33:12.062826 (sd-merge)[1203]: Merged extensions into '/usr'. Sep 12 17:33:12.069504 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:33:12.069518 systemd[1]: Reloading... Sep 12 17:33:12.133142 zram_generator::config[1229]: No configuration found. Sep 12 17:33:12.178868 ldconfig[1173]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:33:12.253968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:33:12.306541 systemd[1]: Reloading finished in 236 ms. Sep 12 17:33:12.326953 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:33:12.328398 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:33:12.339691 systemd[1]: Starting ensure-sysext.service... Sep 12 17:33:12.344254 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:33:12.352151 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:33:12.353159 systemd[1]: Reloading... Sep 12 17:33:12.362206 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:33:12.362800 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:33:12.363583 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:33:12.363894 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Sep 12 17:33:12.363999 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Sep 12 17:33:12.366205 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:33:12.366272 systemd-tmpfiles[1273]: Skipping /boot Sep 12 17:33:12.372935 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:33:12.373076 systemd-tmpfiles[1273]: Skipping /boot Sep 12 17:33:12.424944 zram_generator::config[1298]: No configuration found. Sep 12 17:33:12.538569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
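The loop0..loop7 scans and the (sd-merge) lines above are systemd-sysext at work: each extension image is attached as a loop device and its /usr tree is stacked over the base /usr via a read-only overlayfs, which is why a daemon reload follows the merge. A sketch of the resulting mount, with hypothetical per-extension mountpoints (the real paths systemd-sysext uses differ):

```python
def overlay_mount_cmd(extensions):
    """Sketch of the merge systemd-sysext performs: every extension's /usr
    becomes an overlayfs lower layer stacked above the base /usr."""
    # hypothetical mountpoints for the attached images
    layers = [f"/run/extensions/{name}/usr" for name in extensions]
    lower = ":".join(layers + ["/usr"])  # leftmost entry is the top layer
    return f"mount -t overlay -o lowerdir={lower} overlay /usr"

print(overlay_mount_cmd(
    ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-hetzner"]
))  # illustrative only; do not run against a live system
```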
Sep 12 17:33:12.590784 systemd[1]: Reloading finished in 237 ms. Sep 12 17:33:12.610457 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:33:12.615522 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:33:12.625403 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:33:12.628325 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:33:12.634891 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:33:12.640448 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:33:12.645580 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:33:12.651006 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:33:12.653253 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:12.653399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:33:12.656402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:33:12.660376 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:33:12.663329 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:33:12.663897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:33:12.664027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:12.668701 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:33:12.671314 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:12.671593 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:33:12.671725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:33:12.671802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:12.674664 systemd-udevd[1356]: Using default interface naming scheme 'v255'. Sep 12 17:33:12.690163 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:33:12.691392 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:33:12.691794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:33:12.696154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:33:12.696273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:33:12.701267 systemd[1]: Finished ensure-sysext.service. Sep 12 17:33:12.702656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:12.702792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 12 17:33:12.709671 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:33:12.710639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:33:12.710713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:33:12.716558 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:33:12.718395 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:12.718916 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:33:12.720040 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:33:12.721002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:33:12.724076 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:33:12.726891 augenrules[1376]: No rules
Sep 12 17:33:12.729935 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:33:12.733294 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:33:12.734316 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:33:12.734511 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:33:12.749979 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:33:12.753736 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:33:12.757381 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:33:12.780694 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:33:12.791870 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:33:12.796635 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:33:12.839220 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:33:12.877867 systemd-networkd[1391]: lo: Link UP
Sep 12 17:33:12.877879 systemd-networkd[1391]: lo: Gained carrier
Sep 12 17:33:12.878598 systemd-networkd[1391]: Enumeration completed
Sep 12 17:33:12.878681 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:33:12.885374 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:33:12.894818 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:33:12.897846 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:33:12.902917 systemd-resolved[1355]: Positive Trust Anchors:
Sep 12 17:33:12.903183 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:33:12.903354 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:33:12.910475 systemd-resolved[1355]: Using system hostname 'ci-4081-3-6-2-340685d2b8'.
Sep 12 17:33:12.913526 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:33:12.914428 systemd[1]: Reached target network.target - Network.
Sep 12 17:33:12.915159 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:33:12.926188 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1400)
Sep 12 17:33:12.943046 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:33:12.943055 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:33:12.944551 systemd-networkd[1391]: eth0: Link UP
Sep 12 17:33:12.944645 systemd-networkd[1391]: eth0: Gained carrier
Sep 12 17:33:12.944701 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:33:12.972946 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:33:12.972954 systemd-networkd[1391]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:33:12.974220 systemd-networkd[1391]: eth1: Link UP
Sep 12 17:33:12.974226 systemd-networkd[1391]: eth1: Gained carrier
Sep 12 17:33:12.974237 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:33:12.987142 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 12 17:33:12.999218 systemd-networkd[1391]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 12 17:33:13.000006 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Sep 12 17:33:13.008153 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:33:13.010231 systemd-networkd[1391]: eth0: DHCPv4 address 135.181.98.85/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 12 17:33:13.011256 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Sep 12 17:33:13.011578 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Sep 12 17:33:13.017903 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Sep 12 17:33:13.017946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:13.018029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:33:13.029571 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:33:13.033096 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:33:13.035828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:33:13.036581 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:33:13.036620 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:33:13.036635 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:13.036974 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:33:13.037497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:33:13.043162 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:33:13.051891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 12 17:33:13.062150 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Sep 12 17:33:13.063459 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:33:13.066616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:33:13.066779 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:33:13.068143 kernel: EDAC MC: Ver: 3.0.0
Sep 12 17:33:13.071494 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:33:13.071760 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:33:13.076399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:33:13.076453 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:33:13.088458 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:33:13.096007 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 17:33:13.096399 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 12 17:33:13.096576 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 17:33:13.116428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:33:13.143146 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Sep 12 17:33:13.143233 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Sep 12 17:33:13.147482 kernel: Console: switching to colour dummy device 80x25
Sep 12 17:33:13.148925 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 12 17:33:13.148963 kernel: [drm] features: -context_init
Sep 12 17:33:13.151417 kernel: [drm] number of scanouts: 1
Sep 12 17:33:13.151450 kernel: [drm] number of cap sets: 0
Sep 12 17:33:13.153174 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Sep 12 17:33:13.153713 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:33:13.153880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:33:13.159247 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 12 17:33:13.159325 kernel: Console: switching to colour frame buffer device 160x50
Sep 12 17:33:13.162550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:33:13.166159 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 12 17:33:13.174293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:33:13.174468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:33:13.182439 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:33:13.220609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:33:13.277133 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:33:13.280343 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:33:13.292179 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:33:13.319803 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:33:13.320066 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:33:13.320149 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:33:13.320279 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:33:13.320375 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:33:13.320574 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:33:13.320696 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:33:13.320756 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:33:13.320832 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:33:13.320853 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:33:13.320894 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:33:13.322836 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:33:13.324549 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:33:13.340544 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:33:13.341974 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:33:13.344181 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:33:13.344999 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:33:13.345076 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:33:13.345762 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:33:13.345787 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:33:13.348234 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:33:13.350638 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:33:13.358312 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:33:13.363152 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:33:13.367271 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:33:13.373066 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:33:13.374824 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:33:13.379169 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:33:13.387385 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:33:13.392471 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Sep 12 17:33:13.399637 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:33:13.407340 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:33:13.414570 dbus-daemon[1464]: [system] SELinux support is enabled
Sep 12 17:33:13.417375 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:33:13.420506 jq[1465]: false
Sep 12 17:33:13.419873 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:33:13.421643 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found loop4
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found loop5
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found loop6
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found loop7
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found sda
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found sda1
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found sda2
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found sda3
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found usr
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found sda4
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found sda6
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found sda7
Sep 12 17:33:13.422008 extend-filesystems[1468]: Found sda9
Sep 12 17:33:13.422008 extend-filesystems[1468]: Checking size of /dev/sda9
Sep 12 17:33:13.486041 extend-filesystems[1468]: Resized partition /dev/sda9
Sep 12 17:33:13.491115 coreos-metadata[1463]: Sep 12 17:33:13.476 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Sep 12 17:33:13.491115 coreos-metadata[1463]: Sep 12 17:33:13.478 INFO Fetch successful
Sep 12 17:33:13.491115 coreos-metadata[1463]: Sep 12 17:33:13.478 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Sep 12 17:33:13.491115 coreos-metadata[1463]: Sep 12 17:33:13.479 INFO Fetch successful
Sep 12 17:33:13.515385 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Sep 12 17:33:13.424541 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:33:13.515498 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:33:13.439227 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:33:13.449881 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:33:13.465001 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:33:13.520715 jq[1478]: true
Sep 12 17:33:13.480760 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:33:13.480931 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:33:13.482847 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:33:13.482981 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:33:13.503974 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:33:13.504023 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:33:13.510310 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:33:13.510346 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:33:13.542266 update_engine[1476]: I20250912 17:33:13.531495 1476 main.cc:92] Flatcar Update Engine starting
Sep 12 17:33:13.542266 update_engine[1476]: I20250912 17:33:13.536849 1476 update_check_scheduler.cc:74] Next update check in 6m4s
Sep 12 17:33:13.538107 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:33:13.547305 jq[1490]: true
Sep 12 17:33:13.549004 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:33:13.550477 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:33:13.559728 tar[1487]: linux-amd64/LICENSE
Sep 12 17:33:13.559728 tar[1487]: linux-amd64/helm
Sep 12 17:33:13.560565 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:33:13.560818 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:33:13.566526 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1395)
Sep 12 17:33:13.598350 systemd-logind[1475]: New seat seat0.
Sep 12 17:33:13.601703 systemd-logind[1475]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 12 17:33:13.601723 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:33:13.602435 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:33:13.659177 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Sep 12 17:33:13.664243 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:33:13.681816 bash[1530]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:33:13.691253 extend-filesystems[1491]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 12 17:33:13.691253 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 5
Sep 12 17:33:13.691253 extend-filesystems[1491]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Sep 12 17:33:13.690492 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:33:13.709593 extend-filesystems[1468]: Resized filesystem in /dev/sda9
Sep 12 17:33:13.709593 extend-filesystems[1468]: Found sr0
Sep 12 17:33:13.690683 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:33:13.695149 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:33:13.712646 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:33:13.719521 systemd[1]: Starting sshkeys.service...
Sep 12 17:33:13.748691 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 12 17:33:13.761023 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 12 17:33:13.800814 coreos-metadata[1539]: Sep 12 17:33:13.799 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Sep 12 17:33:13.800814 coreos-metadata[1539]: Sep 12 17:33:13.800 INFO Fetch successful
Sep 12 17:33:13.802180 unknown[1539]: wrote ssh authorized keys file for user: core
Sep 12 17:33:13.827526 update-ssh-keys[1547]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:33:13.828182 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 12 17:33:13.834528 systemd[1]: Finished sshkeys.service.
Sep 12 17:33:13.856879 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 17:33:13.878160 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:33:13.932523 containerd[1494]: time="2025-09-12T17:33:13.932384255Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:33:14.005217 containerd[1494]: time="2025-09-12T17:33:14.004670526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:33:14.007880 containerd[1494]: time="2025-09-12T17:33:14.007837835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:33:14.008481 containerd[1494]: time="2025-09-12T17:33:14.008459080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:33:14.008619 containerd[1494]: time="2025-09-12T17:33:14.008601577Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:33:14.008794 containerd[1494]: time="2025-09-12T17:33:14.008780793Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:33:14.009168 containerd[1494]: time="2025-09-12T17:33:14.009152330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:33:14.009314 containerd[1494]: time="2025-09-12T17:33:14.009282704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:33:14.009421 containerd[1494]: time="2025-09-12T17:33:14.009403521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:33:14.011086 containerd[1494]: time="2025-09-12T17:33:14.009708653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:33:14.011086 containerd[1494]: time="2025-09-12T17:33:14.009727228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:33:14.011086 containerd[1494]: time="2025-09-12T17:33:14.009739280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:33:14.011086 containerd[1494]: time="2025-09-12T17:33:14.009747536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:33:14.011086 containerd[1494]: time="2025-09-12T17:33:14.009811045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:33:14.011086 containerd[1494]: time="2025-09-12T17:33:14.009974762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:33:14.011086 containerd[1494]: time="2025-09-12T17:33:14.010061275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:33:14.011086 containerd[1494]: time="2025-09-12T17:33:14.010073758Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:33:14.011086 containerd[1494]: time="2025-09-12T17:33:14.010204633Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:33:14.011086 containerd[1494]: time="2025-09-12T17:33:14.010242955Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:33:14.014807 containerd[1494]: time="2025-09-12T17:33:14.014758312Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:33:14.014912 containerd[1494]: time="2025-09-12T17:33:14.014895019Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:33:14.015021 containerd[1494]: time="2025-09-12T17:33:14.015008401Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:33:14.015072 containerd[1494]: time="2025-09-12T17:33:14.015062834Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:33:14.015141 containerd[1494]: time="2025-09-12T17:33:14.015128938Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:33:14.015286 containerd[1494]: time="2025-09-12T17:33:14.015271685Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:33:14.015880 containerd[1494]: time="2025-09-12T17:33:14.015854418Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:33:14.016200 containerd[1494]: time="2025-09-12T17:33:14.016182433Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:33:14.016331 containerd[1494]: time="2025-09-12T17:33:14.016314962Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:33:14.016532 containerd[1494]: time="2025-09-12T17:33:14.016521850Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:33:14.016576 containerd[1494]: time="2025-09-12T17:33:14.016567685Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:33:14.016620 containerd[1494]: time="2025-09-12T17:33:14.016611547Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:33:14.016661 containerd[1494]: time="2025-09-12T17:33:14.016652885Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:33:14.016702 containerd[1494]: time="2025-09-12T17:33:14.016693572Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:33:14.016777 containerd[1494]: time="2025-09-12T17:33:14.016766508Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016828805Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016845557Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016857889Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016877536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016890671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016905238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016921849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016938300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016956214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016970641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.016990789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.017008672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.017029070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017408 containerd[1494]: time="2025-09-12T17:33:14.017044249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017654 containerd[1494]: time="2025-09-12T17:33:14.017057875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017654 containerd[1494]: time="2025-09-12T17:33:14.017105714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017654 containerd[1494]: time="2025-09-12T17:33:14.017140269Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:33:14.017654 containerd[1494]: time="2025-09-12T17:33:14.017165005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017654 containerd[1494]: time="2025-09-12T17:33:14.017175946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.017654 containerd[1494]: time="2025-09-12T17:33:14.017185664Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:33:14.019776 containerd[1494]: time="2025-09-12T17:33:14.018252074Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:33:14.019776 containerd[1494]: time="2025-09-12T17:33:14.018276930Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:33:14.019776 containerd[1494]: time="2025-09-12T17:33:14.018289324Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:33:14.019776 containerd[1494]: time="2025-09-12T17:33:14.018381306Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:33:14.019776 containerd[1494]: time="2025-09-12T17:33:14.018390954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.019776 containerd[1494]: time="2025-09-12T17:33:14.018402245Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 17:33:14.019776 containerd[1494]: time="2025-09-12T17:33:14.018411262Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 17:33:14.019776 containerd[1494]: time="2025-09-12T17:33:14.018420570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 12 17:33:14.019939 containerd[1494]: time="2025-09-12T17:33:14.018724179Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 12 17:33:14.019939 containerd[1494]: time="2025-09-12T17:33:14.018788921Z" level=info msg="Connect containerd service"
Sep 12 17:33:14.019939 containerd[1494]: time="2025-09-12T17:33:14.018816001Z" level=info msg="using legacy CRI server"
Sep 12 17:33:14.019939 containerd[1494]: time="2025-09-12T17:33:14.018823345Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 17:33:14.019939 containerd[1494]: time="2025-09-12T17:33:14.018915518Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 12 17:33:14.021040 containerd[1494]: time="2025-09-12T17:33:14.021019954Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:33:14.021461 containerd[1494]: time="2025-09-12T17:33:14.021432107Z" level=info msg="Start subscribing containerd event"
Sep 12 17:33:14.022021 containerd[1494]: time="2025-09-12T17:33:14.021857665Z" level=info msg="Start recovering state"
Sep 12 17:33:14.022021 containerd[1494]: time="2025-09-12T17:33:14.021911055Z" level=info msg="Start event monitor"
Sep 12 17:33:14.022021 containerd[1494]: time="2025-09-12T17:33:14.021923879Z" level=info msg="Start snapshots syncer"
Sep 12 17:33:14.022021 containerd[1494]: time="2025-09-12T17:33:14.021931503Z" level=info msg="Start cni network conf syncer for default"
Sep 12 17:33:14.022021 containerd[1494]: time="2025-09-12T17:33:14.021938667Z" level=info msg="Start streaming server"
Sep 12 17:33:14.022745 containerd[1494]: time="2025-09-12T17:33:14.022731915Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 17:33:14.022866 containerd[1494]: time="2025-09-12T17:33:14.022854394Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 17:33:14.023126 containerd[1494]: time="2025-09-12T17:33:14.023100796Z" level=info msg="containerd successfully booted in 0.093630s"
Sep 12 17:33:14.023188 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:33:14.209557 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:33:14.238970 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:33:14.251081 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:33:14.262408 systemd[1]: Started sshd@0-135.181.98.85:22-80.94.95.116:15792.service - OpenSSH per-connection server daemon (80.94.95.116:15792).
Sep 12 17:33:14.266767 systemd[1]: Started sshd@1-135.181.98.85:22-147.75.109.163:45238.service - OpenSSH per-connection server daemon (147.75.109.163:45238).
Sep 12 17:33:14.280104 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:33:14.280264 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:33:14.291472 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:33:14.301169 tar[1487]: linux-amd64/README.md
Sep 12 17:33:14.307637 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:33:14.309764 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 17:33:14.317406 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:33:14.320229 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 17:33:14.320931 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:33:14.609267 systemd-networkd[1391]: eth0: Gained IPv6LL
Sep 12 17:33:14.610566 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Sep 12 17:33:14.612205 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:33:14.615522 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:33:14.623346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:33:14.626842 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:33:14.647023 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:33:14.673277 systemd-networkd[1391]: eth1: Gained IPv6LL
Sep 12 17:33:14.673815 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Sep 12 17:33:15.244257 sshd[1571]: Accepted publickey for core from 147.75.109.163 port 45238 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA
Sep 12 17:33:15.247240 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:15.257487 systemd-logind[1475]: New session 1 of user core.
Sep 12 17:33:15.259366 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 17:33:15.267499 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 17:33:15.284062 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 17:33:15.295888 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 17:33:15.299550 (systemd)[1598]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 17:33:15.411611 systemd[1598]: Queued start job for default target default.target.
Sep 12 17:33:15.421301 systemd[1598]: Created slice app.slice - User Application Slice.
Sep 12 17:33:15.421418 systemd[1598]: Reached target paths.target - Paths.
Sep 12 17:33:15.421433 systemd[1598]: Reached target timers.target - Timers.
Sep 12 17:33:15.424240 systemd[1598]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 17:33:15.433507 systemd[1598]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 17:33:15.433729 systemd[1598]: Reached target sockets.target - Sockets.
Sep 12 17:33:15.433750 systemd[1598]: Reached target basic.target - Basic System.
Sep 12 17:33:15.433791 systemd[1598]: Reached target default.target - Main User Target.
Sep 12 17:33:15.433821 systemd[1598]: Startup finished in 127ms.
Sep 12 17:33:15.434025 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 17:33:15.443393 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 17:33:15.459595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:33:15.463724 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 17:33:15.464627 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:33:15.469185 systemd[1]: Startup finished in 1.304s (kernel) + 6.186s (initrd) + 4.624s (userspace) = 12.116s.
Sep 12 17:33:15.855325 sshd[1570]: Connection closed by authenticating user root 80.94.95.116 port 15792 [preauth]
Sep 12 17:33:15.856709 systemd[1]: sshd@0-135.181.98.85:22-80.94.95.116:15792.service: Deactivated successfully.
Sep 12 17:33:15.994166 kubelet[1612]: E0912 17:33:15.994031 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:33:15.996377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:33:15.996495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:33:15.996926 systemd[1]: kubelet.service: Consumed 1.001s CPU time.
Sep 12 17:33:16.172492 systemd[1]: Started sshd@2-135.181.98.85:22-147.75.109.163:45250.service - OpenSSH per-connection server daemon (147.75.109.163:45250).
Sep 12 17:33:17.241519 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 45250 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA
Sep 12 17:33:17.242833 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:17.246742 systemd-logind[1475]: New session 2 of user core.
Sep 12 17:33:17.253319 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 17:33:17.985354 sshd[1627]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:17.988532 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit.
Sep 12 17:33:17.989230 systemd[1]: sshd@2-135.181.98.85:22-147.75.109.163:45250.service: Deactivated successfully.
Sep 12 17:33:17.990881 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 17:33:17.991652 systemd-logind[1475]: Removed session 2.
Sep 12 17:33:18.132519 systemd[1]: Started sshd@3-135.181.98.85:22-147.75.109.163:45266.service - OpenSSH per-connection server daemon (147.75.109.163:45266).
Sep 12 17:33:19.096715 sshd[1634]: Accepted publickey for core from 147.75.109.163 port 45266 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA
Sep 12 17:33:19.097931 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:19.101904 systemd-logind[1475]: New session 3 of user core.
Sep 12 17:33:19.111359 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 17:33:19.764290 sshd[1634]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:19.766977 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit.
Sep 12 17:33:19.767586 systemd[1]: sshd@3-135.181.98.85:22-147.75.109.163:45266.service: Deactivated successfully.
Sep 12 17:33:19.769110 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 17:33:19.770250 systemd-logind[1475]: Removed session 3.
Sep 12 17:33:19.930270 systemd[1]: Started sshd@4-135.181.98.85:22-147.75.109.163:42872.service - OpenSSH per-connection server daemon (147.75.109.163:42872).
Sep 12 17:33:20.897615 sshd[1641]: Accepted publickey for core from 147.75.109.163 port 42872 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA
Sep 12 17:33:20.898940 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:20.903513 systemd-logind[1475]: New session 4 of user core.
Sep 12 17:33:20.908430 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 17:33:21.572948 sshd[1641]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:21.575312 systemd[1]: sshd@4-135.181.98.85:22-147.75.109.163:42872.service: Deactivated successfully.
Sep 12 17:33:21.577371 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit.
Sep 12 17:33:21.578069 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 17:33:21.578967 systemd-logind[1475]: Removed session 4.
Sep 12 17:33:21.739771 systemd[1]: Started sshd@5-135.181.98.85:22-147.75.109.163:42888.service - OpenSSH per-connection server daemon (147.75.109.163:42888).
Sep 12 17:33:22.705018 sshd[1648]: Accepted publickey for core from 147.75.109.163 port 42888 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA
Sep 12 17:33:22.706429 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:22.710707 systemd-logind[1475]: New session 5 of user core.
Sep 12 17:33:22.712311 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 17:33:23.226325 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:33:23.226611 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:33:23.240934 sudo[1651]: pam_unix(sudo:session): session closed for user root
Sep 12 17:33:23.398382 sshd[1648]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:23.402111 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit.
Sep 12 17:33:23.402302 systemd[1]: sshd@5-135.181.98.85:22-147.75.109.163:42888.service: Deactivated successfully.
Sep 12 17:33:23.403838 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 17:33:23.404650 systemd-logind[1475]: Removed session 5.
Sep 12 17:33:23.567712 systemd[1]: Started sshd@6-135.181.98.85:22-147.75.109.163:42892.service - OpenSSH per-connection server daemon (147.75.109.163:42892).
Sep 12 17:33:24.532498 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 42892 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA
Sep 12 17:33:24.533757 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:24.538300 systemd-logind[1475]: New session 6 of user core.
Sep 12 17:33:24.550380 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 17:33:25.048426 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:33:25.048711 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:33:25.052388 sudo[1660]: pam_unix(sudo:session): session closed for user root
Sep 12 17:33:25.057617 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 12 17:33:25.057906 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:33:25.079469 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 12 17:33:25.080967 auditctl[1663]: No rules
Sep 12 17:33:25.081397 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:33:25.081607 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 12 17:33:25.084075 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:33:25.111424 augenrules[1681]: No rules
Sep 12 17:33:25.112210 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:33:25.114211 sudo[1659]: pam_unix(sudo:session): session closed for user root
Sep 12 17:33:25.271940 sshd[1656]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:25.275007 systemd[1]: sshd@6-135.181.98.85:22-147.75.109.163:42892.service: Deactivated successfully.
Sep 12 17:33:25.276435 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 17:33:25.276988 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit.
Sep 12 17:33:25.277806 systemd-logind[1475]: Removed session 6.
Sep 12 17:33:25.442561 systemd[1]: Started sshd@7-135.181.98.85:22-147.75.109.163:42908.service - OpenSSH per-connection server daemon (147.75.109.163:42908).
Sep 12 17:33:26.235041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:33:26.240601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:33:26.341345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:33:26.345096 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:33:26.387874 kubelet[1699]: E0912 17:33:26.387810 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:33:26.391580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:33:26.391755 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:33:26.411147 sshd[1689]: Accepted publickey for core from 147.75.109.163 port 42908 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA
Sep 12 17:33:26.412343 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:26.416870 systemd-logind[1475]: New session 7 of user core.
Sep 12 17:33:26.423264 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:33:26.927150 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:33:26.927389 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:33:27.205337 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:33:27.206957 (dockerd)[1722]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:33:27.480362 dockerd[1722]: time="2025-09-12T17:33:27.480231645Z" level=info msg="Starting up"
Sep 12 17:33:27.546107 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1273755106-merged.mount: Deactivated successfully.
Sep 12 17:33:27.576825 dockerd[1722]: time="2025-09-12T17:33:27.576778099Z" level=info msg="Loading containers: start."
Sep 12 17:33:27.667157 kernel: Initializing XFRM netlink socket
Sep 12 17:33:27.690592 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Sep 12 17:33:27.719595 systemd-timesyncd[1374]: Contacted time server 128.140.109.119:123 (2.flatcar.pool.ntp.org).
Sep 12 17:33:27.720264 systemd-timesyncd[1374]: Initial clock synchronization to Fri 2025-09-12 17:33:28.016988 UTC.
Sep 12 17:33:27.741193 systemd-networkd[1391]: docker0: Link UP
Sep 12 17:33:27.756264 dockerd[1722]: time="2025-09-12T17:33:27.756206643Z" level=info msg="Loading containers: done."
Sep 12 17:33:27.770996 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1911909593-merged.mount: Deactivated successfully.
Sep 12 17:33:27.774588 dockerd[1722]: time="2025-09-12T17:33:27.774500237Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:33:27.774739 dockerd[1722]: time="2025-09-12T17:33:27.774636944Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 12 17:33:27.774739 dockerd[1722]: time="2025-09-12T17:33:27.774729197Z" level=info msg="Daemon has completed initialization"
Sep 12 17:33:27.803623 dockerd[1722]: time="2025-09-12T17:33:27.803541506Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:33:27.803950 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:33:28.944785 containerd[1494]: time="2025-09-12T17:33:28.944497504Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 17:33:29.475787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376976938.mount: Deactivated successfully.
Sep 12 17:33:30.543821 containerd[1494]: time="2025-09-12T17:33:30.543733428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:30.545185 containerd[1494]: time="2025-09-12T17:33:30.545119780Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114993"
Sep 12 17:33:30.545717 containerd[1494]: time="2025-09-12T17:33:30.545677172Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:30.548505 containerd[1494]: time="2025-09-12T17:33:30.548467968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:30.549810 containerd[1494]: time="2025-09-12T17:33:30.549591894Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.605049956s"
Sep 12 17:33:30.549810 containerd[1494]: time="2025-09-12T17:33:30.549624218Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 12 17:33:30.550602 containerd[1494]: time="2025-09-12T17:33:30.550569902Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 17:33:31.880828 containerd[1494]: time="2025-09-12T17:33:31.880739396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:31.882376 containerd[1494]: time="2025-09-12T17:33:31.882111318Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020866"
Sep 12 17:33:31.883265 containerd[1494]: time="2025-09-12T17:33:31.883210599Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:31.885977 containerd[1494]: time="2025-09-12T17:33:31.885926273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:31.887780 containerd[1494]: time="2025-09-12T17:33:31.886926443Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.336325794s"
Sep 12 17:33:31.887780 containerd[1494]: time="2025-09-12T17:33:31.886961274Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 12 17:33:31.888119 containerd[1494]: time="2025-09-12T17:33:31.888069130Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 17:33:32.980411 containerd[1494]: time="2025-09-12T17:33:32.980342077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:32.981380 containerd[1494]: time="2025-09-12T17:33:32.981339289Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155590"
Sep 12 17:33:32.982236 containerd[1494]: time="2025-09-12T17:33:32.981873746Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:32.984580 containerd[1494]: time="2025-09-12T17:33:32.984520306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:32.985769 containerd[1494]: time="2025-09-12T17:33:32.985661310Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.097545886s"
Sep 12 17:33:32.985769 containerd[1494]: time="2025-09-12T17:33:32.985689503Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 12 17:33:32.989044 containerd[1494]: time="2025-09-12T17:33:32.988977776Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 12 17:33:34.033398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701274965.mount: Deactivated successfully.
Sep 12 17:33:34.386471 containerd[1494]: time="2025-09-12T17:33:34.386323779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:34.387436 containerd[1494]: time="2025-09-12T17:33:34.387338281Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929497" Sep 12 17:33:34.389040 containerd[1494]: time="2025-09-12T17:33:34.388163708Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:34.389848 containerd[1494]: time="2025-09-12T17:33:34.389816764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:34.390545 containerd[1494]: time="2025-09-12T17:33:34.390300612Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.401293746s" Sep 12 17:33:34.390545 containerd[1494]: time="2025-09-12T17:33:34.390367768Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 12 17:33:34.390883 containerd[1494]: time="2025-09-12T17:33:34.390765372Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:33:34.887529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112497707.mount: Deactivated successfully. 
Sep 12 17:33:35.818373 containerd[1494]: time="2025-09-12T17:33:35.818319200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:35.819365 containerd[1494]: time="2025-09-12T17:33:35.819336254Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Sep 12 17:33:35.820403 containerd[1494]: time="2025-09-12T17:33:35.819998880Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:35.822966 containerd[1494]: time="2025-09-12T17:33:35.822546734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:35.823635 containerd[1494]: time="2025-09-12T17:33:35.823607936Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.432822413s" Sep 12 17:33:35.823680 containerd[1494]: time="2025-09-12T17:33:35.823639418Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 12 17:33:35.824333 containerd[1494]: time="2025-09-12T17:33:35.824227543Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:33:36.282168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214943491.mount: Deactivated successfully. 
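The containerd entries in this stretch each end with a `Pulled image ... size ... in <duration>` completion message. A minimal sketch for tabulating those pull times from a journal dump like this one (assumptions: the dump is read from stdin, and the regex mirrors the escaped-quote message format shown above; this is log post-processing, not part of the boot sequence):

```python
import re
import sys

# Matches containerd's pull-completion message as it appears in this dump, e.g.
#   Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" ... in 1.432822413s
PULL_RE = re.compile(r'Pulled image \\?"([^"\\]+)\\?".*? in ([0-9.]+m?s)')

for line in sys.stdin:
    for m in PULL_RE.finditer(line):
        image, took = m.groups()
        print(f"{took:>14}  {image}")
```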
Sep 12 17:33:36.288108 containerd[1494]: time="2025-09-12T17:33:36.288059994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:36.288950 containerd[1494]: time="2025-09-12T17:33:36.288863453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Sep 12 17:33:36.289823 containerd[1494]: time="2025-09-12T17:33:36.289777807Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:36.292005 containerd[1494]: time="2025-09-12T17:33:36.291960274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:36.292543 containerd[1494]: time="2025-09-12T17:33:36.292512821Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 468.261739ms" Sep 12 17:33:36.292593 containerd[1494]: time="2025-09-12T17:33:36.292544478Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:33:36.293734 containerd[1494]: time="2025-09-12T17:33:36.293509546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:33:36.484912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:33:36.491303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:36.594280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:36.594572 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:33:36.626629 kubelet[1999]: E0912 17:33:36.626561 1999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:33:36.628427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:33:36.628549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:33:36.729489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940901620.mount: Deactivated successfully. 
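The kubelet exit above (`open /var/lib/kubelet/config.yaml: no such file or directory`, restart counter at 2) is the normal first-boot loop on a kubeadm-provisioned node: systemd keeps restarting kubelet.service until kubeadm writes the config file, at which point the unit comes up cleanly. A minimal sketch of placing such a file; the YAML body is a hypothetical minimal KubeletConfiguration, with `cgroupDriver: systemd` and the static-pod path mirroring values that appear in the kubelet entries further down in this log:

```python
import pathlib

# Hypothetical minimal KubeletConfiguration; on a real node `kubeadm init`
# writes this file. cgroupDriver and staticPodPath match values visible below
# ("CgroupDriver":"systemd", "Adding static pod path"
# path="/etc/kubernetes/manifests").
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

cfg = pathlib.Path("/var/lib/kubelet/config.yaml")
cfg.parent.mkdir(parents=True, exist_ok=True)
cfg.write_text(KUBELET_CONFIG)  # after this, the restart loop above succeeds
```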
Sep 12 17:33:38.428234 containerd[1494]: time="2025-09-12T17:33:38.428180677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:38.430103 containerd[1494]: time="2025-09-12T17:33:38.429716160Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378491" Sep 12 17:33:38.430103 containerd[1494]: time="2025-09-12T17:33:38.430048882Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:38.434322 containerd[1494]: time="2025-09-12T17:33:38.434253624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:38.435951 containerd[1494]: time="2025-09-12T17:33:38.435707818Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.142122114s" Sep 12 17:33:38.435951 containerd[1494]: time="2025-09-12T17:33:38.435746566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 17:33:42.328934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:42.339340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:42.366539 systemd[1]: Reloading requested from client PID 2091 ('systemctl') (unit session-7.scope)... Sep 12 17:33:42.366551 systemd[1]: Reloading... Sep 12 17:33:42.467245 zram_generator::config[2137]: No configuration found. Sep 12 17:33:42.570737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:33:42.652874 systemd[1]: Reloading finished in 286 ms. Sep 12 17:33:42.695805 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:33:42.695894 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:33:42.696196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:42.703794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:42.803293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:42.811571 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:33:42.863927 kubelet[2186]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:33:42.864392 kubelet[2186]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 12 17:33:42.864392 kubelet[2186]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:33:42.867301 kubelet[2186]: I0912 17:33:42.866399 2186 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:33:43.363108 kubelet[2186]: I0912 17:33:43.362733 2186 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:33:43.363108 kubelet[2186]: I0912 17:33:43.362773 2186 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:33:43.364086 kubelet[2186]: I0912 17:33:43.364074 2186 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:33:43.391088 kubelet[2186]: I0912 17:33:43.391035 2186 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:33:43.394510 kubelet[2186]: E0912 17:33:43.394439 2186 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://135.181.98.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 135.181.98.85:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:33:43.405843 kubelet[2186]: E0912 17:33:43.405784 2186 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:33:43.405843 kubelet[2186]: I0912 17:33:43.405823 2186 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:33:43.410092 kubelet[2186]: I0912 17:33:43.410069 2186 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:33:43.413630 kubelet[2186]: I0912 17:33:43.413571 2186 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:33:43.416355 kubelet[2186]: I0912 17:33:43.413600 2186 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-2-340685d2b8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:33:43.416355 kubelet[2186]: I0912 17:33:43.416346 2186 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:33:43.416355 kubelet[2186]: I0912 17:33:43.416376 2186 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:33:43.416557 kubelet[2186]: I0912 17:33:43.416517 2186 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:33:43.419959 kubelet[2186]: I0912 17:33:43.419240 2186 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:33:43.419959 kubelet[2186]: I0912 17:33:43.419260 2186 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:33:43.419959 kubelet[2186]: I0912 17:33:43.419282 2186 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:33:43.419959 kubelet[2186]: I0912 17:33:43.419309 2186 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:33:43.427279 kubelet[2186]: E0912 17:33:43.427240 2186 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://135.181.98.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-2-340685d2b8&limit=500&resourceVersion=0\": dial tcp 135.181.98.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:33:43.429094 kubelet[2186]: I0912 17:33:43.427987 2186 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:33:43.429094 kubelet[2186]: I0912 17:33:43.428455 2186 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection 
featuregate is disabled" Sep 12 17:33:43.430529 kubelet[2186]: W0912 17:33:43.429747 2186 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:33:43.436133 kubelet[2186]: E0912 17:33:43.436003 2186 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://135.181.98.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.98.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:33:43.442172 kubelet[2186]: I0912 17:33:43.442097 2186 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:33:43.442299 kubelet[2186]: I0912 17:33:43.442191 2186 server.go:1289] "Started kubelet" Sep 12 17:33:43.446993 kubelet[2186]: I0912 17:33:43.445657 2186 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:33:43.448432 kubelet[2186]: E0912 17:33:43.446863 2186 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://135.181.98.85:6443/api/v1/namespaces/default/events\": dial tcp 135.181.98.85:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-2-340685d2b8.18649967478866ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-2-340685d2b8,UID:ci-4081-3-6-2-340685d2b8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-2-340685d2b8,},FirstTimestamp:2025-09-12 17:33:43.442138861 +0000 UTC m=+0.626587231,LastTimestamp:2025-09-12 17:33:43.442138861 +0000 UTC m=+0.626587231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-2-340685d2b8,}" Sep 12 17:33:43.449773 kubelet[2186]: I0912 17:33:43.449749 2186 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:33:43.452181 kubelet[2186]: I0912 17:33:43.452165 2186 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:33:43.457347 kubelet[2186]: I0912 17:33:43.457275 2186 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:33:43.457746 kubelet[2186]: I0912 17:33:43.457728 2186 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:33:43.458298 kubelet[2186]: I0912 17:33:43.458284 2186 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:33:43.459902 kubelet[2186]: I0912 17:33:43.459875 2186 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:33:43.460265 kubelet[2186]: E0912 17:33:43.460234 2186 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-2-340685d2b8\" not found" Sep 12 17:33:43.461502 kubelet[2186]: I0912 17:33:43.461478 2186 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:33:43.461598 kubelet[2186]: I0912 17:33:43.461564 2186 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:33:43.461977 kubelet[2186]: E0912 17:33:43.461948 2186 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://135.181.98.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": 
dial tcp 135.181.98.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:33:43.462100 kubelet[2186]: E0912 17:33:43.462069 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.98.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-2-340685d2b8?timeout=10s\": dial tcp 135.181.98.85:6443: connect: connection refused" interval="200ms" Sep 12 17:33:43.462650 kubelet[2186]: I0912 17:33:43.462627 2186 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:33:43.462719 kubelet[2186]: I0912 17:33:43.462697 2186 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:33:43.463378 kubelet[2186]: E0912 17:33:43.463355 2186 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:33:43.464493 kubelet[2186]: I0912 17:33:43.464469 2186 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:33:43.476374 kubelet[2186]: I0912 17:33:43.476335 2186 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:33:43.478548 kubelet[2186]: I0912 17:33:43.478523 2186 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:33:43.478665 kubelet[2186]: I0912 17:33:43.478657 2186 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:33:43.478775 kubelet[2186]: I0912 17:33:43.478766 2186 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:33:43.478826 kubelet[2186]: I0912 17:33:43.478820 2186 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:33:43.478935 kubelet[2186]: E0912 17:33:43.478909 2186 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:33:43.487259 kubelet[2186]: E0912 17:33:43.486976 2186 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://135.181.98.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 135.181.98.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:33:43.490597 kubelet[2186]: I0912 17:33:43.490577 2186 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:33:43.490784 kubelet[2186]: I0912 17:33:43.490709 2186 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:33:43.490784 kubelet[2186]: I0912 17:33:43.490724 2186 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:33:43.492504 kubelet[2186]: I0912 17:33:43.492493 2186 policy_none.go:49] "None policy: Start" Sep 12 17:33:43.492568 kubelet[2186]: I0912 17:33:43.492562 2186 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:33:43.492613 kubelet[2186]: I0912 17:33:43.492608 2186 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:33:43.498690 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:33:43.507666 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
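The `Creating Container Manager object based on Node Config` entry above embeds the whole NodeConfig as a single JSON blob. A small sketch that decodes its HardEvictionThresholds list back into the kubelet's familiar `evictionHard` shorthand; the list literal below is copied verbatim from that entry:

```python
import json

# HardEvictionThresholds as embedded in the container_manager_linux entry above.
THRESHOLDS = json.loads('''[
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}
]''')

# Render each threshold as evictionHard notation, e.g. memory.available<100Mi.
for t in THRESHOLDS:
    v = t["Value"]
    limit = v["Quantity"] or f'{v["Percentage"]:.0%}'
    print(f'{t["Signal"]}<{limit}')
```

Run against the values above, this prints the kubelet's default hard-eviction set: `imagefs.inodesFree<5%`, `memory.available<100Mi`, `nodefs.available<10%`, `nodefs.inodesFree<5%`, `imagefs.available<15%`.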
Sep 12 17:33:43.510194 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:33:43.520168 kubelet[2186]: E0912 17:33:43.519805 2186 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:33:43.520168 kubelet[2186]: I0912 17:33:43.520022 2186 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:33:43.520168 kubelet[2186]: I0912 17:33:43.520032 2186 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:33:43.520742 kubelet[2186]: I0912 17:33:43.520628 2186 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:33:43.522117 kubelet[2186]: E0912 17:33:43.522055 2186 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:33:43.522117 kubelet[2186]: E0912 17:33:43.522090 2186 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-2-340685d2b8\" not found" Sep 12 17:33:43.591185 systemd[1]: Created slice kubepods-burstable-poddd0f9f6eaaab6078f218b809689863ae.slice - libcontainer container kubepods-burstable-poddd0f9f6eaaab6078f218b809689863ae.slice. Sep 12 17:33:43.598178 kubelet[2186]: E0912 17:33:43.598078 2186 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-2-340685d2b8\" not found" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.602587 systemd[1]: Created slice kubepods-burstable-podc3d7c8576a9e26963b51cadc993e654b.slice - libcontainer container kubepods-burstable-podc3d7c8576a9e26963b51cadc993e654b.slice. Sep 12 17:33:43.613682 kubelet[2186]: E0912 17:33:43.613458 2186 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-2-340685d2b8\" not found" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.616592 systemd[1]: Created slice kubepods-burstable-podd05c13e438a231f9a5807f8e4f1ba738.slice - libcontainer container kubepods-burstable-podd05c13e438a231f9a5807f8e4f1ba738.slice. 
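The burst of `dial tcp 135.181.98.85:6443: connect: connection refused` reflector, lease, and event errors through this stretch is the expected bootstrap ordering: the kubelet is already polling the API server endpoint while it is still creating the static control-plane pods that will serve it, and the errors stop once the kube-apiserver container started below is up. A minimal sketch of the same wait, with the address taken from those entries:

```python
import socket
import time

API = ("135.181.98.85", 6443)  # endpoint from the refused connections above

# Poll until the (not-yet-started) kube-apiserver accepts connections; this is
# effectively what the kubelet's reflectors are retrying in the entries around
# this point.
while True:
    try:
        with socket.create_connection(API, timeout=2):
            print("kube-apiserver is accepting connections")
            break
    except OSError as exc:
        print(f"still waiting: {exc}")
        time.sleep(1)
```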
Sep 12 17:33:43.618247 kubelet[2186]: E0912 17:33:43.618223 2186 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-2-340685d2b8\" not found" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.621765 kubelet[2186]: I0912 17:33:43.621739 2186 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.622110 kubelet[2186]: E0912 17:33:43.622071 2186 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.98.85:6443/api/v1/nodes\": dial tcp 135.181.98.85:6443: connect: connection refused" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.662800 kubelet[2186]: I0912 17:33:43.662759 2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd0f9f6eaaab6078f218b809689863ae-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" (UID: \"dd0f9f6eaaab6078f218b809689863ae\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.663208 kubelet[2186]: I0912 17:33:43.663183 2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd0f9f6eaaab6078f218b809689863ae-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" (UID: \"dd0f9f6eaaab6078f218b809689863ae\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.663394 kubelet[2186]: I0912 17:33:43.663212 2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd0f9f6eaaab6078f218b809689863ae-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" (UID: \"dd0f9f6eaaab6078f218b809689863ae\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.663394 kubelet[2186]: I0912 17:33:43.663230 2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3d7c8576a9e26963b51cadc993e654b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-2-340685d2b8\" (UID: \"c3d7c8576a9e26963b51cadc993e654b\") " pod="kube-system/kube-scheduler-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.663394 kubelet[2186]: E0912 17:33:43.662972 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.98.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-2-340685d2b8?timeout=10s\": dial tcp 135.181.98.85:6443: connect: connection refused" interval="400ms" Sep 12 17:33:43.663394 kubelet[2186]: I0912 17:33:43.663246 2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd0f9f6eaaab6078f218b809689863ae-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" (UID: \"dd0f9f6eaaab6078f218b809689863ae\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.663394 kubelet[2186]: I0912 17:33:43.663278 2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd0f9f6eaaab6078f218b809689863ae-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" (UID: \"dd0f9f6eaaab6078f218b809689863ae\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.663507 kubelet[2186]: I0912 17:33:43.663294 2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d05c13e438a231f9a5807f8e4f1ba738-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-2-340685d2b8\" (UID: \"d05c13e438a231f9a5807f8e4f1ba738\") " pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.663507 kubelet[2186]: I0912 17:33:43.663315 2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d05c13e438a231f9a5807f8e4f1ba738-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-2-340685d2b8\" (UID: \"d05c13e438a231f9a5807f8e4f1ba738\") " pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.663507 kubelet[2186]: I0912 17:33:43.663333 2186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d05c13e438a231f9a5807f8e4f1ba738-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-2-340685d2b8\" (UID: \"d05c13e438a231f9a5807f8e4f1ba738\") " pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.824327 kubelet[2186]: I0912 17:33:43.824272 2186 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.824626 kubelet[2186]: E0912 17:33:43.824588 2186 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.98.85:6443/api/v1/nodes\": dial tcp 135.181.98.85:6443: connect: connection refused" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:43.899875 containerd[1494]: time="2025-09-12T17:33:43.899740443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-2-340685d2b8,Uid:dd0f9f6eaaab6078f218b809689863ae,Namespace:kube-system,Attempt:0,}" Sep 12 17:33:43.919133 containerd[1494]: time="2025-09-12T17:33:43.919063348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-2-340685d2b8,Uid:c3d7c8576a9e26963b51cadc993e654b,Namespace:kube-system,Attempt:0,}" Sep 12 17:33:43.920032 containerd[1494]: time="2025-09-12T17:33:43.919998826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-2-340685d2b8,Uid:d05c13e438a231f9a5807f8e4f1ba738,Namespace:kube-system,Attempt:0,}" Sep 12 17:33:44.064480 kubelet[2186]: E0912 17:33:44.064418 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.98.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-2-340685d2b8?timeout=10s\": dial tcp 135.181.98.85:6443: connect: connection refused" interval="800ms" Sep 12 17:33:44.231242 kubelet[2186]: I0912 17:33:44.229708 2186 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:44.231242 kubelet[2186]: E0912 17:33:44.230143 2186 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.98.85:6443/api/v1/nodes\": dial tcp 135.181.98.85:6443: connect: connection refused" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:44.319174 kubelet[2186]: E0912 17:33:44.319098 2186 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://135.181.98.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
135.181.98.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:33:44.353113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385383276.mount: Deactivated successfully. Sep 12 17:33:44.361090 containerd[1494]: time="2025-09-12T17:33:44.361004284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:33:44.362027 containerd[1494]: time="2025-09-12T17:33:44.361982337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Sep 12 17:33:44.362937 containerd[1494]: time="2025-09-12T17:33:44.362904783Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:33:44.363824 containerd[1494]: time="2025-09-12T17:33:44.363780880Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:33:44.364679 containerd[1494]: time="2025-09-12T17:33:44.364630150Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:33:44.365810 containerd[1494]: time="2025-09-12T17:33:44.365773303Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:33:44.366345 containerd[1494]: time="2025-09-12T17:33:44.366287602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:33:44.371159 containerd[1494]: time="2025-09-12T17:33:44.369582110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:33:44.371159 containerd[1494]: time="2025-09-12T17:33:44.370373892Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 470.532934ms" Sep 12 17:33:44.373114 containerd[1494]: time="2025-09-12T17:33:44.372996637Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 452.937173ms" Sep 12 17:33:44.373654 containerd[1494]: time="2025-09-12T17:33:44.373615781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 454.457758ms" Sep 12 17:33:44.504048 containerd[1494]: 
time="2025-09-12T17:33:44.503956122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:44.504752 containerd[1494]: time="2025-09-12T17:33:44.504011819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:44.505385 containerd[1494]: time="2025-09-12T17:33:44.505175118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:44.505385 containerd[1494]: time="2025-09-12T17:33:44.505346076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:44.511881 containerd[1494]: time="2025-09-12T17:33:44.510206928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:44.511881 containerd[1494]: time="2025-09-12T17:33:44.510271340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:44.511881 containerd[1494]: time="2025-09-12T17:33:44.510300723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:44.511881 containerd[1494]: time="2025-09-12T17:33:44.510426719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:44.517585 containerd[1494]: time="2025-09-12T17:33:44.517441724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:44.517776 containerd[1494]: time="2025-09-12T17:33:44.517600102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:44.517776 containerd[1494]: time="2025-09-12T17:33:44.517622482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:44.517776 containerd[1494]: time="2025-09-12T17:33:44.517712855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:44.534057 kubelet[2186]: E0912 17:33:44.533977 2186 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://135.181.98.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-2-340685d2b8&limit=500&resourceVersion=0\": dial tcp 135.181.98.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:33:44.546174 systemd[1]: Started cri-containerd-8e4e84ccd2dbfadc4c9a8077039e01998cce0daf4719e15e6cab672d9ce1cea2.scope - libcontainer container 8e4e84ccd2dbfadc4c9a8077039e01998cce0daf4719e15e6cab672d9ce1cea2. Sep 12 17:33:44.566465 systemd[1]: Started cri-containerd-2f58b6308691c82952daa4663e4d7f5e3d25d159f9d1baa43f4247a8876cfdde.scope - libcontainer container 2f58b6308691c82952daa4663e4d7f5e3d25d159f9d1baa43f4247a8876cfdde. Sep 12 17:33:44.568785 systemd[1]: Started cri-containerd-3ba33610dacf8d5b2b9087f68b3197c6534ad69a6fc9a8ab4b98e6cbfa5ce113.scope - libcontainer container 3ba33610dacf8d5b2b9087f68b3197c6534ad69a6fc9a8ab4b98e6cbfa5ce113. 
Sep 12 17:33:44.619601 kubelet[2186]: E0912 17:33:44.619148 2186 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://135.181.98.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.98.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:33:44.619943 containerd[1494]: time="2025-09-12T17:33:44.619912555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-2-340685d2b8,Uid:c3d7c8576a9e26963b51cadc993e654b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e4e84ccd2dbfadc4c9a8077039e01998cce0daf4719e15e6cab672d9ce1cea2\"" Sep 12 17:33:44.635934 containerd[1494]: time="2025-09-12T17:33:44.635901347Z" level=info msg="CreateContainer within sandbox \"8e4e84ccd2dbfadc4c9a8077039e01998cce0daf4719e15e6cab672d9ce1cea2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:33:44.638315 containerd[1494]: time="2025-09-12T17:33:44.638176162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-2-340685d2b8,Uid:dd0f9f6eaaab6078f218b809689863ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f58b6308691c82952daa4663e4d7f5e3d25d159f9d1baa43f4247a8876cfdde\"" Sep 12 17:33:44.642971 containerd[1494]: time="2025-09-12T17:33:44.642817524Z" level=info msg="CreateContainer within sandbox \"2f58b6308691c82952daa4663e4d7f5e3d25d159f9d1baa43f4247a8876cfdde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:33:44.650288 containerd[1494]: time="2025-09-12T17:33:44.649898632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-2-340685d2b8,Uid:d05c13e438a231f9a5807f8e4f1ba738,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ba33610dacf8d5b2b9087f68b3197c6534ad69a6fc9a8ab4b98e6cbfa5ce113\"" Sep 12 17:33:44.656891 containerd[1494]: time="2025-09-12T17:33:44.656844727Z" level=info msg="CreateContainer within sandbox \"3ba33610dacf8d5b2b9087f68b3197c6534ad69a6fc9a8ab4b98e6cbfa5ce113\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:33:44.659782 containerd[1494]: time="2025-09-12T17:33:44.659610556Z" level=info msg="CreateContainer within sandbox \"8e4e84ccd2dbfadc4c9a8077039e01998cce0daf4719e15e6cab672d9ce1cea2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1\"" Sep 12 17:33:44.660541 containerd[1494]: time="2025-09-12T17:33:44.660517193Z" level=info msg="StartContainer for \"035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1\"" Sep 12 17:33:44.665649 containerd[1494]: time="2025-09-12T17:33:44.665576705Z" level=info msg="CreateContainer within sandbox \"2f58b6308691c82952daa4663e4d7f5e3d25d159f9d1baa43f4247a8876cfdde\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588\"" Sep 12 17:33:44.667718 containerd[1494]: time="2025-09-12T17:33:44.667293941Z" level=info msg="StartContainer for \"5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588\"" Sep 12 17:33:44.675109 containerd[1494]: time="2025-09-12T17:33:44.675066563Z" level=info msg="CreateContainer within sandbox \"3ba33610dacf8d5b2b9087f68b3197c6534ad69a6fc9a8ab4b98e6cbfa5ce113\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"78f75a63a690bdfca2b40e11ddeab6b8c944c67977163bb7f81bbc57b9ccbfb4\"" Sep 12 17:33:44.676781 containerd[1494]: time="2025-09-12T17:33:44.676664152Z" level=info msg="StartContainer for \"78f75a63a690bdfca2b40e11ddeab6b8c944c67977163bb7f81bbc57b9ccbfb4\"" Sep 12 17:33:44.691446 systemd[1]: Started cri-containerd-035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1.scope - libcontainer container 035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1. Sep 12 17:33:44.710321 systemd[1]: Started cri-containerd-78f75a63a690bdfca2b40e11ddeab6b8c944c67977163bb7f81bbc57b9ccbfb4.scope - libcontainer container 78f75a63a690bdfca2b40e11ddeab6b8c944c67977163bb7f81bbc57b9ccbfb4. Sep 12 17:33:44.715093 systemd[1]: Started cri-containerd-5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588.scope - libcontainer container 5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588. Sep 12 17:33:44.775378 containerd[1494]: time="2025-09-12T17:33:44.773744366Z" level=info msg="StartContainer for \"5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588\" returns successfully" Sep 12 17:33:44.783434 containerd[1494]: time="2025-09-12T17:33:44.783059393Z" level=info msg="StartContainer for \"78f75a63a690bdfca2b40e11ddeab6b8c944c67977163bb7f81bbc57b9ccbfb4\" returns successfully" Sep 12 17:33:44.801371 containerd[1494]: time="2025-09-12T17:33:44.801314708Z" level=info msg="StartContainer for \"035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1\" returns successfully" Sep 12 17:33:44.865333 kubelet[2186]: E0912 17:33:44.865250 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.98.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-2-340685d2b8?timeout=10s\": dial tcp 135.181.98.85:6443: connect: connection refused" interval="1.6s" Sep 12 17:33:45.003931 kubelet[2186]: E0912 17:33:45.003879 2186 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://135.181.98.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 135.181.98.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:33:45.033088 kubelet[2186]: I0912 17:33:45.032824 2186 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:45.497926 kubelet[2186]: E0912 17:33:45.497822 2186 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-2-340685d2b8\" not found" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:45.498236 kubelet[2186]: E0912 17:33:45.498075 2186 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-2-340685d2b8\" not found" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:45.498853 kubelet[2186]: E0912 17:33:45.498837 2186 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-2-340685d2b8\" not found" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:46.502078 kubelet[2186]: E0912 17:33:46.502027 2186 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-2-340685d2b8\" not found" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:46.502531 kubelet[2186]: E0912 17:33:46.502510 2186 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"ci-4081-3-6-2-340685d2b8\" not found" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:47.125214 kubelet[2186]: E0912 17:33:47.125171 2186 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-2-340685d2b8\" not found" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:47.215244 kubelet[2186]: E0912 17:33:47.215142 2186 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-6-2-340685d2b8.18649967478866ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-2-340685d2b8,UID:ci-4081-3-6-2-340685d2b8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-2-340685d2b8,},FirstTimestamp:2025-09-12 17:33:43.442138861 +0000 UTC m=+0.626587231,LastTimestamp:2025-09-12 17:33:43.442138861 +0000 UTC m=+0.626587231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-2-340685d2b8,}" Sep 12 17:33:47.272181 kubelet[2186]: E0912 17:33:47.271283 2186 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-6-2-340685d2b8.1864996748cbf263 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-2-340685d2b8,UID:ci-4081-3-6-2-340685d2b8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-2-340685d2b8,},FirstTimestamp:2025-09-12 17:33:43.463342691 +0000 UTC m=+0.647791061,LastTimestamp:2025-09-12 17:33:43.463342691 +0000 UTC m=+0.647791061,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-2-340685d2b8,}" Sep 12 17:33:47.272405 kubelet[2186]: I0912 17:33:47.272394 2186 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:47.272476 kubelet[2186]: E0912 17:33:47.272467 2186 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-2-340685d2b8\": node \"ci-4081-3-6-2-340685d2b8\" not found" Sep 12 17:33:47.362306 kubelet[2186]: I0912 17:33:47.362260 2186 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:47.367964 kubelet[2186]: E0912 17:33:47.367924 2186 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:47.367964 kubelet[2186]: I0912 17:33:47.367949 2186 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:47.370032 kubelet[2186]: E0912 17:33:47.369413 2186 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-2-340685d2b8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:47.370032 kubelet[2186]: I0912 17:33:47.369463 2186 kubelet.go:3309] "Creating a mirror pod for 
static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:47.370571 kubelet[2186]: E0912 17:33:47.370530 2186 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-2-340685d2b8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:47.435393 kubelet[2186]: I0912 17:33:47.434845 2186 apiserver.go:52] "Watching apiserver" Sep 12 17:33:47.461855 kubelet[2186]: I0912 17:33:47.461795 2186 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:33:49.274171 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-7.scope)... Sep 12 17:33:49.274190 systemd[1]: Reloading... Sep 12 17:33:49.391187 zram_generator::config[2523]: No configuration found. Sep 12 17:33:49.484179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:33:49.582382 systemd[1]: Reloading finished in 307 ms. Sep 12 17:33:49.618043 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:49.629523 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:33:49.629922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:49.636402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:49.731972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:49.743677 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:33:49.801410 kubelet[2565]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:33:49.801410 kubelet[2565]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:33:49.801410 kubelet[2565]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:33:49.801889 kubelet[2565]: I0912 17:33:49.801481 2565 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:33:49.807154 kubelet[2565]: I0912 17:33:49.807104 2565 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:33:49.807204 kubelet[2565]: I0912 17:33:49.807159 2565 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:33:49.807393 kubelet[2565]: I0912 17:33:49.807368 2565 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:33:49.808499 kubelet[2565]: I0912 17:33:49.808476 2565 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:33:49.811169 kubelet[2565]: I0912 17:33:49.810346 2565 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:33:49.820346 kubelet[2565]: E0912 17:33:49.820304 2565 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:33:49.820346 kubelet[2565]: I0912 17:33:49.820339 2565 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:33:49.826933 kubelet[2565]: I0912 17:33:49.826887 2565 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:33:49.827218 kubelet[2565]: I0912 17:33:49.827176 2565 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:33:49.827385 kubelet[2565]: I0912 17:33:49.827213 2565 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-2-340685d2b8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:33:49.827385 kubelet[2565]: I0912 17:33:49.827384 2565 topology_manager.go:138] 
"Creating topology manager with none policy" Sep 12 17:33:49.827491 kubelet[2565]: I0912 17:33:49.827393 2565 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:33:49.827491 kubelet[2565]: I0912 17:33:49.827437 2565 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:33:49.827650 kubelet[2565]: I0912 17:33:49.827627 2565 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:33:49.827683 kubelet[2565]: I0912 17:33:49.827653 2565 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:33:49.832985 kubelet[2565]: I0912 17:33:49.829097 2565 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:33:49.832985 kubelet[2565]: I0912 17:33:49.829136 2565 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:33:49.832985 kubelet[2565]: I0912 17:33:49.830720 2565 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:33:49.832985 kubelet[2565]: I0912 17:33:49.831175 2565 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:33:49.838590 kubelet[2565]: I0912 17:33:49.838565 2565 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:33:49.838690 kubelet[2565]: I0912 17:33:49.838606 2565 server.go:1289] "Started kubelet" Sep 12 17:33:49.842729 kubelet[2565]: I0912 17:33:49.842667 2565 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:33:49.843165 kubelet[2565]: I0912 17:33:49.843155 2565 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:33:49.844321 kubelet[2565]: I0912 17:33:49.844288 2565 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:33:49.854903 kubelet[2565]: I0912 17:33:49.854850 2565 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:33:49.855941 kubelet[2565]: I0912 17:33:49.855929 2565 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:33:49.856840 kubelet[2565]: I0912 17:33:49.856827 2565 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:33:49.858676 kubelet[2565]: I0912 17:33:49.858666 2565 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:33:49.858935 kubelet[2565]: E0912 17:33:49.858922 2565 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-2-340685d2b8\" not found" Sep 12 17:33:49.860511 kubelet[2565]: I0912 17:33:49.860490 2565 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:33:49.860657 kubelet[2565]: I0912 17:33:49.860648 2565 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:33:49.862024 kubelet[2565]: I0912 17:33:49.862005 2565 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:33:49.862813 kubelet[2565]: I0912 17:33:49.862802 2565 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:33:49.863563 kubelet[2565]: I0912 17:33:49.863551 2565 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:33:49.863621 kubelet[2565]: I0912 17:33:49.863615 2565 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:33:49.863671 kubelet[2565]: I0912 17:33:49.863665 2565 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:33:49.863741 kubelet[2565]: E0912 17:33:49.863729 2565 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:33:49.866721 kubelet[2565]: I0912 17:33:49.866678 2565 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:33:49.869585 kubelet[2565]: E0912 17:33:49.869568 2565 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:33:49.870989 kubelet[2565]: I0912 17:33:49.870977 2565 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:33:49.872048 kubelet[2565]: I0912 17:33:49.872038 2565 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:33:49.919562 kubelet[2565]: I0912 17:33:49.919517 2565 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:33:49.919562 kubelet[2565]: I0912 17:33:49.919539 2565 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:33:49.919562 kubelet[2565]: I0912 17:33:49.919561 2565 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:33:49.919736 kubelet[2565]: I0912 17:33:49.919686 2565 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:33:49.919736 kubelet[2565]: I0912 17:33:49.919695 2565 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:33:49.919736 kubelet[2565]: I0912 17:33:49.919710 2565 policy_none.go:49] "None policy: Start" Sep 12 17:33:49.919736 kubelet[2565]: I0912 17:33:49.919719 2565 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:33:49.919736 kubelet[2565]: I0912 17:33:49.919726 2565 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:33:49.919851 kubelet[2565]: I0912 17:33:49.919798 2565 state_mem.go:75] "Updated machine memory state" Sep 12 17:33:49.923443 kubelet[2565]: E0912 17:33:49.923427 2565 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:33:49.924009 kubelet[2565]: I0912 17:33:49.923654 2565 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:33:49.924009 kubelet[2565]: I0912 17:33:49.923667 2565 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:33:49.924009 kubelet[2565]: I0912 17:33:49.923865 2565 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:33:49.925212 kubelet[2565]: E0912 17:33:49.925199 2565 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:33:49.964766 kubelet[2565]: I0912 17:33:49.964729 2565 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:49.965557 kubelet[2565]: I0912 17:33:49.965075 2565 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:49.965751 kubelet[2565]: I0912 17:33:49.965186 2565 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.028215 kubelet[2565]: I0912 17:33:50.028186 2565 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.041662 kubelet[2565]: I0912 17:33:50.041607 2565 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.041834 kubelet[2565]: I0912 17:33:50.041715 2565 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.163334 kubelet[2565]: I0912 17:33:50.162304 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd0f9f6eaaab6078f218b809689863ae-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" (UID: \"dd0f9f6eaaab6078f218b809689863ae\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.163334 kubelet[2565]: I0912 17:33:50.162335 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d05c13e438a231f9a5807f8e4f1ba738-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-2-340685d2b8\" (UID: \"d05c13e438a231f9a5807f8e4f1ba738\") " pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.163334 kubelet[2565]: I0912 17:33:50.162351 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d05c13e438a231f9a5807f8e4f1ba738-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-2-340685d2b8\" (UID: \"d05c13e438a231f9a5807f8e4f1ba738\") " pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.163334 kubelet[2565]: I0912 17:33:50.162369 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd0f9f6eaaab6078f218b809689863ae-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" (UID: \"dd0f9f6eaaab6078f218b809689863ae\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.163334 kubelet[2565]: I0912 17:33:50.162383 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3d7c8576a9e26963b51cadc993e654b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-2-340685d2b8\" (UID: \"c3d7c8576a9e26963b51cadc993e654b\") " pod="kube-system/kube-scheduler-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.163512 kubelet[2565]: I0912 17:33:50.162395 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d05c13e438a231f9a5807f8e4f1ba738-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-2-340685d2b8\" (UID: 
\"d05c13e438a231f9a5807f8e4f1ba738\") " pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.163512 kubelet[2565]: I0912 17:33:50.162407 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd0f9f6eaaab6078f218b809689863ae-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" (UID: \"dd0f9f6eaaab6078f218b809689863ae\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.163512 kubelet[2565]: I0912 17:33:50.162420 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd0f9f6eaaab6078f218b809689863ae-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" (UID: \"dd0f9f6eaaab6078f218b809689863ae\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.163512 kubelet[2565]: I0912 17:33:50.162432 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd0f9f6eaaab6078f218b809689863ae-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-2-340685d2b8\" (UID: \"dd0f9f6eaaab6078f218b809689863ae\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.831526 kubelet[2565]: I0912 17:33:50.830201 2565 apiserver.go:52] "Watching apiserver" Sep 12 17:33:50.861063 kubelet[2565]: I0912 17:33:50.860991 2565 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:33:50.903897 kubelet[2565]: I0912 17:33:50.901928 2565 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.904409 kubelet[2565]: I0912 17:33:50.904399 2565 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.913527 kubelet[2565]: E0912 17:33:50.913081 2565 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-2-340685d2b8\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.916743 kubelet[2565]: E0912 17:33:50.916485 2565 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-2-340685d2b8\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-2-340685d2b8" Sep 12 17:33:50.937575 kubelet[2565]: I0912 17:33:50.937442 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-2-340685d2b8" podStartSLOduration=1.93742769 podStartE2EDuration="1.93742769s" podCreationTimestamp="2025-09-12 17:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:50.93695093 +0000 UTC m=+1.188469775" watchObservedRunningTime="2025-09-12 17:33:50.93742769 +0000 UTC m=+1.188946515" Sep 12 17:33:50.956354 kubelet[2565]: I0912 17:33:50.956192 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-2-340685d2b8" podStartSLOduration=1.95617531 podStartE2EDuration="1.95617531s" podCreationTimestamp="2025-09-12 17:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:50.948341938 +0000 UTC m=+1.199860763" 
watchObservedRunningTime="2025-09-12 17:33:50.95617531 +0000 UTC m=+1.207694134" Sep 12 17:33:50.956354 kubelet[2565]: I0912 17:33:50.956274 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-2-340685d2b8" podStartSLOduration=1.956270184 podStartE2EDuration="1.956270184s" podCreationTimestamp="2025-09-12 17:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:50.956155615 +0000 UTC m=+1.207674439" watchObservedRunningTime="2025-09-12 17:33:50.956270184 +0000 UTC m=+1.207789009" Sep 12 17:33:54.075074 kubelet[2565]: I0912 17:33:54.074954 2565 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:33:54.076096 containerd[1494]: time="2025-09-12T17:33:54.076046359Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:33:54.076670 kubelet[2565]: I0912 17:33:54.076472 2565 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:33:55.050286 systemd[1]: Created slice kubepods-besteffort-poda492a9c9_6a04_4b4f_9af3_791f96f28b86.slice - libcontainer container kubepods-besteffort-poda492a9c9_6a04_4b4f_9af3_791f96f28b86.slice. Sep 12 17:33:55.094466 kubelet[2565]: I0912 17:33:55.094427 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a492a9c9-6a04-4b4f-9af3-791f96f28b86-lib-modules\") pod \"kube-proxy-486zs\" (UID: \"a492a9c9-6a04-4b4f-9af3-791f96f28b86\") " pod="kube-system/kube-proxy-486zs" Sep 12 17:33:55.094466 kubelet[2565]: I0912 17:33:55.094468 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nbwf\" (UniqueName: \"kubernetes.io/projected/a492a9c9-6a04-4b4f-9af3-791f96f28b86-kube-api-access-5nbwf\") pod \"kube-proxy-486zs\" (UID: \"a492a9c9-6a04-4b4f-9af3-791f96f28b86\") " pod="kube-system/kube-proxy-486zs" Sep 12 17:33:55.094881 kubelet[2565]: I0912 17:33:55.094492 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a492a9c9-6a04-4b4f-9af3-791f96f28b86-kube-proxy\") pod \"kube-proxy-486zs\" (UID: \"a492a9c9-6a04-4b4f-9af3-791f96f28b86\") " pod="kube-system/kube-proxy-486zs" Sep 12 17:33:55.094881 kubelet[2565]: I0912 17:33:55.094504 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a492a9c9-6a04-4b4f-9af3-791f96f28b86-xtables-lock\") pod \"kube-proxy-486zs\" (UID: \"a492a9c9-6a04-4b4f-9af3-791f96f28b86\") " pod="kube-system/kube-proxy-486zs" Sep 12 17:33:55.227329 systemd[1]: Created slice kubepods-besteffort-podfff1ec1e_fb7f_479a_98be_b52f48fb2c08.slice - libcontainer container kubepods-besteffort-podfff1ec1e_fb7f_479a_98be_b52f48fb2c08.slice. 
Sep 12 17:33:55.295652 kubelet[2565]: I0912 17:33:55.295560 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fff1ec1e-fb7f-479a-98be-b52f48fb2c08-var-lib-calico\") pod \"tigera-operator-755d956888-x44hs\" (UID: \"fff1ec1e-fb7f-479a-98be-b52f48fb2c08\") " pod="tigera-operator/tigera-operator-755d956888-x44hs" Sep 12 17:33:55.295652 kubelet[2565]: I0912 17:33:55.295614 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k82x\" (UniqueName: \"kubernetes.io/projected/fff1ec1e-fb7f-479a-98be-b52f48fb2c08-kube-api-access-9k82x\") pod \"tigera-operator-755d956888-x44hs\" (UID: \"fff1ec1e-fb7f-479a-98be-b52f48fb2c08\") " pod="tigera-operator/tigera-operator-755d956888-x44hs" Sep 12 17:33:55.360762 containerd[1494]: time="2025-09-12T17:33:55.360650107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-486zs,Uid:a492a9c9-6a04-4b4f-9af3-791f96f28b86,Namespace:kube-system,Attempt:0,}" Sep 12 17:33:55.387057 containerd[1494]: time="2025-09-12T17:33:55.386948986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:55.393341 containerd[1494]: time="2025-09-12T17:33:55.389167162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:55.393341 containerd[1494]: time="2025-09-12T17:33:55.389200889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:55.393341 containerd[1494]: time="2025-09-12T17:33:55.389335246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:55.423377 systemd[1]: Started cri-containerd-dbffe865df8cf99390e399ec8688c3089556d2a0b6d385d35041f579ef6d2670.scope - libcontainer container dbffe865df8cf99390e399ec8688c3089556d2a0b6d385d35041f579ef6d2670. Sep 12 17:33:55.446398 containerd[1494]: time="2025-09-12T17:33:55.446323885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-486zs,Uid:a492a9c9-6a04-4b4f-9af3-791f96f28b86,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbffe865df8cf99390e399ec8688c3089556d2a0b6d385d35041f579ef6d2670\"" Sep 12 17:33:55.452360 containerd[1494]: time="2025-09-12T17:33:55.452206637Z" level=info msg="CreateContainer within sandbox \"dbffe865df8cf99390e399ec8688c3089556d2a0b6d385d35041f579ef6d2670\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:33:55.467391 containerd[1494]: time="2025-09-12T17:33:55.467298853Z" level=info msg="CreateContainer within sandbox \"dbffe865df8cf99390e399ec8688c3089556d2a0b6d385d35041f579ef6d2670\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b5a72f78834ec1e50d303d90dbe0ba582fda44c3e084e5dd8fc99063cd3e4933\"" Sep 12 17:33:55.468261 containerd[1494]: time="2025-09-12T17:33:55.468235013Z" level=info msg="StartContainer for \"b5a72f78834ec1e50d303d90dbe0ba582fda44c3e084e5dd8fc99063cd3e4933\"" Sep 12 17:33:55.496452 systemd[1]: Started cri-containerd-b5a72f78834ec1e50d303d90dbe0ba582fda44c3e084e5dd8fc99063cd3e4933.scope - libcontainer container b5a72f78834ec1e50d303d90dbe0ba582fda44c3e084e5dd8fc99063cd3e4933. 
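The kube-proxy-486zs bring-up above (RunPodSandbox returning sandbox id dbffe865..., then CreateContainer and StartContainer inside it) is the standard three-step CRI sequence. A sketch of the same sequence, again assuming the k8s.io/cri-api v1 bindings; the kube-proxy image reference is an assumption, since the log never prints it:

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()

    	// 1. RunPodSandbox: returns the long hex sandbox id seen in the log.
    	cfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name:      "kube-proxy-486zs",
    			Namespace: "kube-system",
    			Uid:       "a492a9c9-6a04-4b4f-9af3-791f96f28b86",
    		},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: cfg})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 2. CreateContainer inside that sandbox (image name is an assumption here).
    	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
    			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
    		},
    		SandboxConfig: cfg,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 3. StartContainer: the log's "StartContainer ... returns successfully".
    	if _, err := rt.StartContainer(ctx,
    		&runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
    		log.Fatal(err)
    	}
    }

The interleaved systemd "Started cri-containerd-....scope" lines are containerd placing each of these containers into its own transient scope unit, matching the systemd cgroup driver declared in the NodeConfig earlier.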
Sep 12 17:33:55.521928 containerd[1494]: time="2025-09-12T17:33:55.521480630Z" level=info msg="StartContainer for \"b5a72f78834ec1e50d303d90dbe0ba582fda44c3e084e5dd8fc99063cd3e4933\" returns successfully" Sep 12 17:33:55.531315 containerd[1494]: time="2025-09-12T17:33:55.531263116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-x44hs,Uid:fff1ec1e-fb7f-479a-98be-b52f48fb2c08,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:33:55.555979 containerd[1494]: time="2025-09-12T17:33:55.555863317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:55.556570 containerd[1494]: time="2025-09-12T17:33:55.556097281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:55.556570 containerd[1494]: time="2025-09-12T17:33:55.556370690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:55.557694 containerd[1494]: time="2025-09-12T17:33:55.556544140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:55.576109 systemd[1]: Started cri-containerd-e9c142fdd1ed5b4bcefbbc06a0dd4ea832a1d86837b5c01a65dc822115df1657.scope - libcontainer container e9c142fdd1ed5b4bcefbbc06a0dd4ea832a1d86837b5c01a65dc822115df1657. Sep 12 17:33:55.615579 containerd[1494]: time="2025-09-12T17:33:55.614736239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-x44hs,Uid:fff1ec1e-fb7f-479a-98be-b52f48fb2c08,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e9c142fdd1ed5b4bcefbbc06a0dd4ea832a1d86837b5c01a65dc822115df1657\"" Sep 12 17:33:55.618542 containerd[1494]: time="2025-09-12T17:33:55.618279164Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:33:57.920103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257400668.mount: Deactivated successfully. Sep 12 17:33:58.355682 update_engine[1476]: I20250912 17:33:58.355576 1476 update_attempter.cc:509] Updating boot flags... 
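The PullImage request logged at 17:33:55.618 above goes to the CRI image service rather than the runtime service; the tigera-operator sandbox is already running while the image downloads in the background. A sketch of that call with the same assumed bindings:

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	img := runtimeapi.NewImageServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    	defer cancel()

    	// The CRI call behind the log's `PullImage "quay.io/tigera/operator:v1.38.6"`.
    	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.6"},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// resp.ImageRef is the resolved reference; the log later reports it as the
    	// sha256 image id together with the repo digest.
    	log.Println("pulled:", resp.ImageRef)
    }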
Sep 12 17:33:58.390287 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2880) Sep 12 17:33:58.447184 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2881) Sep 12 17:33:59.623475 containerd[1494]: time="2025-09-12T17:33:59.623413126Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:59.624764 containerd[1494]: time="2025-09-12T17:33:59.624598229Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:33:59.627168 containerd[1494]: time="2025-09-12T17:33:59.625514489Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:59.628205 containerd[1494]: time="2025-09-12T17:33:59.628168745Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:59.629455 containerd[1494]: time="2025-09-12T17:33:59.629151028Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.010805968s" Sep 12 17:33:59.629455 containerd[1494]: time="2025-09-12T17:33:59.629185593Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:33:59.633584 containerd[1494]: time="2025-09-12T17:33:59.633540664Z" level=info msg="CreateContainer within sandbox \"e9c142fdd1ed5b4bcefbbc06a0dd4ea832a1d86837b5c01a65dc822115df1657\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:33:59.644340 containerd[1494]: time="2025-09-12T17:33:59.644048160Z" level=info msg="CreateContainer within sandbox \"e9c142fdd1ed5b4bcefbbc06a0dd4ea832a1d86837b5c01a65dc822115df1657\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d\"" Sep 12 17:33:59.646298 containerd[1494]: time="2025-09-12T17:33:59.644984761Z" level=info msg="StartContainer for \"11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d\"" Sep 12 17:33:59.684294 systemd[1]: Started cri-containerd-11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d.scope - libcontainer container 11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d. 
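The pull that started at 17:33:55 completes above: 25062609 bytes read in 4.010805968s, which works out to roughly 6.25 MB/s (the slightly smaller "size \"25058604\"" in the same record is the unpacked image size, not the bytes transferred). A quick check of that rate, using only figures copied from the log:

    package main

    import "fmt"

    func main() {
    	// Figures taken directly from the log lines above.
    	const bytesRead = 25062609      // "bytes read=25062609"
    	const pullSeconds = 4.010805968 // "in 4.010805968s"

    	bps := float64(bytesRead) / pullSeconds
    	fmt.Printf("%.2f MB/s (%.2f MiB/s)\n", bps/1e6, bps/(1<<20))
    	// Prints roughly: 6.25 MB/s (5.96 MiB/s)
    }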
Sep 12 17:33:59.708292 containerd[1494]: time="2025-09-12T17:33:59.708077233Z" level=info msg="StartContainer for \"11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d\" returns successfully" Sep 12 17:33:59.931752 kubelet[2565]: I0912 17:33:59.931601 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-486zs" podStartSLOduration=4.931583907 podStartE2EDuration="4.931583907s" podCreationTimestamp="2025-09-12 17:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:55.925365365 +0000 UTC m=+6.176884190" watchObservedRunningTime="2025-09-12 17:33:59.931583907 +0000 UTC m=+10.183102732" Sep 12 17:34:01.693743 kubelet[2565]: I0912 17:34:01.693683 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-x44hs" podStartSLOduration=2.681491552 podStartE2EDuration="6.693663831s" podCreationTimestamp="2025-09-12 17:33:55 +0000 UTC" firstStartedPulling="2025-09-12 17:33:55.617681741 +0000 UTC m=+5.869200567" lastFinishedPulling="2025-09-12 17:33:59.629854022 +0000 UTC m=+9.881372846" observedRunningTime="2025-09-12 17:33:59.934460225 +0000 UTC m=+10.185979050" watchObservedRunningTime="2025-09-12 17:34:01.693663831 +0000 UTC m=+11.945182655" Sep 12 17:34:05.755505 sudo[1707]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:05.915737 sshd[1689]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:05.918775 systemd[1]: sshd@7-135.181.98.85:22-147.75.109.163:42908.service: Deactivated successfully. Sep 12 17:34:05.922799 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:34:05.924212 systemd[1]: session-7.scope: Consumed 5.581s CPU time, 146.1M memory peak, 0B memory swap peak. Sep 12 17:34:05.926182 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:34:05.928537 systemd-logind[1475]: Removed session 7. Sep 12 17:34:09.372410 systemd[1]: Created slice kubepods-besteffort-pod55de6d6b_4104_4f93_8a90_b96cddc97c11.slice - libcontainer container kubepods-besteffort-pod55de6d6b_4104_4f93_8a90_b96cddc97c11.slice. 
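The two pod_startup_latency_tracker entries above make the SLO accounting visible: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted, which is why kube-proxy, whose image needed no pull, has identical values while tigera-operator does not. A sketch reproducing the tigera-operator arithmetic from the timestamps in its entry:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the tigera-operator pod_startup_latency_tracker entry.
    	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2025-09-12 17:33:55 +0000 UTC")
    	firstPull, _ := time.Parse(layout, "2025-09-12 17:33:55.617681741 +0000 UTC")
    	lastPull, _ := time.Parse(layout, "2025-09-12 17:33:59.629854022 +0000 UTC")
    	watchObserved, _ := time.Parse(layout, "2025-09-12 17:34:01.693663831 +0000 UTC")

    	e2e := watchObserved.Sub(created)    // podStartE2EDuration: 6.693663831s
    	slo := e2e - lastPull.Sub(firstPull) // pull window excluded: ~2.681491552s
    	fmt.Println(e2e, slo)
    }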
Sep 12 17:34:09.390252 kubelet[2565]: I0912 17:34:09.390217 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55de6d6b-4104-4f93-8a90-b96cddc97c11-tigera-ca-bundle\") pod \"calico-typha-79666fcc9f-2ln26\" (UID: \"55de6d6b-4104-4f93-8a90-b96cddc97c11\") " pod="calico-system/calico-typha-79666fcc9f-2ln26" Sep 12 17:34:09.390819 kubelet[2565]: I0912 17:34:09.390770 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzvxj\" (UniqueName: \"kubernetes.io/projected/55de6d6b-4104-4f93-8a90-b96cddc97c11-kube-api-access-qzvxj\") pod \"calico-typha-79666fcc9f-2ln26\" (UID: \"55de6d6b-4104-4f93-8a90-b96cddc97c11\") " pod="calico-system/calico-typha-79666fcc9f-2ln26" Sep 12 17:34:09.390910 kubelet[2565]: I0912 17:34:09.390901 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/55de6d6b-4104-4f93-8a90-b96cddc97c11-typha-certs\") pod \"calico-typha-79666fcc9f-2ln26\" (UID: \"55de6d6b-4104-4f93-8a90-b96cddc97c11\") " pod="calico-system/calico-typha-79666fcc9f-2ln26" Sep 12 17:34:09.679298 containerd[1494]: time="2025-09-12T17:34:09.678933839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79666fcc9f-2ln26,Uid:55de6d6b-4104-4f93-8a90-b96cddc97c11,Namespace:calico-system,Attempt:0,}" Sep 12 17:34:09.705018 containerd[1494]: time="2025-09-12T17:34:09.704756190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:09.705018 containerd[1494]: time="2025-09-12T17:34:09.704854479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:09.705018 containerd[1494]: time="2025-09-12T17:34:09.704899353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:09.705018 containerd[1494]: time="2025-09-12T17:34:09.705008720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:09.748153 systemd[1]: Created slice kubepods-besteffort-podcb5ebed0_2df8_4514_8cc2_4ce4e9946666.slice - libcontainer container kubepods-besteffort-podcb5ebed0_2df8_4514_8cc2_4ce4e9946666.slice. Sep 12 17:34:09.779301 systemd[1]: Started cri-containerd-58a239a7df6974f816c380a7e77fec58000dbe7375d71acb99b62d7f8cae337a.scope - libcontainer container 58a239a7df6974f816c380a7e77fec58000dbe7375d71acb99b62d7f8cae337a. 
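The "Created slice kubepods-besteffort-pod....slice" lines above and below show how the systemd cgroup driver (CgroupDriver:"systemd" in the NodeConfig earlier) derives a transient slice name from a pod: the QoS class goes into the prefix and dashes in the pod UID become underscores. A small sketch of that mapping, inferred from the names visible in this log rather than lifted from kubelet source:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName reconstructs the slice names seen in the log, e.g.
    // kubepods-besteffort-pod55de6d6b_4104_4f93_8a90_b96cddc97c11.slice.
    // Inferred from the log lines; not taken from kubelet's cm package.
    func podSliceName(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	fmt.Println(podSliceName("besteffort", "55de6d6b-4104-4f93-8a90-b96cddc97c11"))
    	// kubepods-besteffort-pod55de6d6b_4104_4f93_8a90_b96cddc97c11.slice
    }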
Sep 12 17:34:09.793148 kubelet[2565]: I0912 17:34:09.792897 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-cni-log-dir\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793148 kubelet[2565]: I0912 17:34:09.792980 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-cni-net-dir\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793148 kubelet[2565]: I0912 17:34:09.792997 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-flexvol-driver-host\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793148 kubelet[2565]: I0912 17:34:09.793064 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-lib-modules\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793381 kubelet[2565]: I0912 17:34:09.793082 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-var-lib-calico\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793381 kubelet[2565]: I0912 17:34:09.793220 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bthfl\" (UniqueName: \"kubernetes.io/projected/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-kube-api-access-bthfl\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793381 kubelet[2565]: I0912 17:34:09.793238 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-var-run-calico\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793381 kubelet[2565]: I0912 17:34:09.793256 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-cni-bin-dir\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793381 kubelet[2565]: I0912 17:34:09.793317 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-node-certs\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793501 kubelet[2565]: I0912 17:34:09.793338 2565 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-policysync\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793501 kubelet[2565]: I0912 17:34:09.793422 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-xtables-lock\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.793501 kubelet[2565]: I0912 17:34:09.793477 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb5ebed0-2df8-4514-8cc2-4ce4e9946666-tigera-ca-bundle\") pod \"calico-node-gvbw8\" (UID: \"cb5ebed0-2df8-4514-8cc2-4ce4e9946666\") " pod="calico-system/calico-node-gvbw8" Sep 12 17:34:09.823258 containerd[1494]: time="2025-09-12T17:34:09.823194688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79666fcc9f-2ln26,Uid:55de6d6b-4104-4f93-8a90-b96cddc97c11,Namespace:calico-system,Attempt:0,} returns sandbox id \"58a239a7df6974f816c380a7e77fec58000dbe7375d71acb99b62d7f8cae337a\"" Sep 12 17:34:09.826448 containerd[1494]: time="2025-09-12T17:34:09.826420716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:34:09.899910 kubelet[2565]: E0912 17:34:09.898164 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:09.899910 kubelet[2565]: W0912 17:34:09.898189 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:09.902862 kubelet[2565]: E0912 17:34:09.902097 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:09.902862 kubelet[2565]: E0912 17:34:09.902786 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:09.902972 kubelet[2565]: W0912 17:34:09.902913 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:09.902972 kubelet[2565]: E0912 17:34:09.902938 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:09.903415 kubelet[2565]: E0912 17:34:09.903381 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:09.903415 kubelet[2565]: W0912 17:34:09.903394 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:09.903415 kubelet[2565]: E0912 17:34:09.903412 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:09.906145 kubelet[2565]: E0912 17:34:09.904868 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:09.906145 kubelet[2565]: W0912 17:34:09.904898 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:09.906145 kubelet[2565]: E0912 17:34:09.904907 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:09.906145 kubelet[2565]: E0912 17:34:09.905105 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:09.906145 kubelet[2565]: W0912 17:34:09.905111 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:09.906145 kubelet[2565]: E0912 17:34:09.905142 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:09.912279 kubelet[2565]: E0912 17:34:09.912245 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:09.912279 kubelet[2565]: W0912 17:34:09.912272 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:09.912441 kubelet[2565]: E0912 17:34:09.912292 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.031555 kubelet[2565]: E0912 17:34:10.031422 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktgfz" podUID="6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6" Sep 12 17:34:10.056877 containerd[1494]: time="2025-09-12T17:34:10.056832665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gvbw8,Uid:cb5ebed0-2df8-4514-8cc2-4ce4e9946666,Namespace:calico-system,Attempt:0,}" Sep 12 17:34:10.080422 containerd[1494]: time="2025-09-12T17:34:10.080312267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:10.080543 containerd[1494]: time="2025-09-12T17:34:10.080437948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:10.080543 containerd[1494]: time="2025-09-12T17:34:10.080460421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:10.080635 containerd[1494]: time="2025-09-12T17:34:10.080592366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:10.091672 kubelet[2565]: E0912 17:34:10.091563 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.091672 kubelet[2565]: W0912 17:34:10.091590 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.091672 kubelet[2565]: E0912 17:34:10.091628 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.092820 kubelet[2565]: E0912 17:34:10.092757 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.092820 kubelet[2565]: W0912 17:34:10.092770 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.092820 kubelet[2565]: E0912 17:34:10.092781 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.093424 kubelet[2565]: E0912 17:34:10.093153 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.093424 kubelet[2565]: W0912 17:34:10.093161 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.093424 kubelet[2565]: E0912 17:34:10.093169 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.093825 kubelet[2565]: E0912 17:34:10.093696 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.093825 kubelet[2565]: W0912 17:34:10.093707 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.093825 kubelet[2565]: E0912 17:34:10.093716 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:10.094156 kubelet[2565]: E0912 17:34:10.094079 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.094156 kubelet[2565]: W0912 17:34:10.094087 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.094156 kubelet[2565]: E0912 17:34:10.094096 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.094732 kubelet[2565]: E0912 17:34:10.094392 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.094732 kubelet[2565]: W0912 17:34:10.094403 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.094732 kubelet[2565]: E0912 17:34:10.094412 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.095378 kubelet[2565]: E0912 17:34:10.094920 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.095378 kubelet[2565]: W0912 17:34:10.094928 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.095378 kubelet[2565]: E0912 17:34:10.094937 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.095655 systemd[1]: Started cri-containerd-a423a2ba9360f6204732f6b86743b19886203cb5214fafcd12ae144f45465ec1.scope - libcontainer container a423a2ba9360f6204732f6b86743b19886203cb5214fafcd12ae144f45465ec1. Sep 12 17:34:10.096167 kubelet[2565]: E0912 17:34:10.095919 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.096167 kubelet[2565]: W0912 17:34:10.095936 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.096167 kubelet[2565]: E0912 17:34:10.095954 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:10.097281 kubelet[2565]: E0912 17:34:10.097257 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.097281 kubelet[2565]: W0912 17:34:10.097275 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.097803 kubelet[2565]: E0912 17:34:10.097287 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.098299 kubelet[2565]: E0912 17:34:10.098272 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.098353 kubelet[2565]: W0912 17:34:10.098293 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.098353 kubelet[2565]: E0912 17:34:10.098335 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.099018 kubelet[2565]: E0912 17:34:10.098858 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.099018 kubelet[2565]: W0912 17:34:10.098872 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.099018 kubelet[2565]: E0912 17:34:10.098882 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.100035 kubelet[2565]: E0912 17:34:10.100013 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.100035 kubelet[2565]: W0912 17:34:10.100031 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.100109 kubelet[2565]: E0912 17:34:10.100043 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.101382 kubelet[2565]: E0912 17:34:10.101355 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.101382 kubelet[2565]: W0912 17:34:10.101372 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.101886 kubelet[2565]: E0912 17:34:10.101396 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:10.101886 kubelet[2565]: E0912 17:34:10.101644 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.101886 kubelet[2565]: W0912 17:34:10.101652 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.101886 kubelet[2565]: E0912 17:34:10.101660 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.101886 kubelet[2565]: E0912 17:34:10.101885 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.102039 kubelet[2565]: W0912 17:34:10.101895 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.102039 kubelet[2565]: E0912 17:34:10.101905 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.102177 kubelet[2565]: E0912 17:34:10.102055 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.102177 kubelet[2565]: W0912 17:34:10.102062 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.102177 kubelet[2565]: E0912 17:34:10.102070 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.102270 kubelet[2565]: E0912 17:34:10.102249 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.102270 kubelet[2565]: W0912 17:34:10.102257 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.102338 kubelet[2565]: E0912 17:34:10.102264 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.104052 kubelet[2565]: E0912 17:34:10.102415 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.104052 kubelet[2565]: W0912 17:34:10.102428 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.104052 kubelet[2565]: E0912 17:34:10.102456 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:10.104052 kubelet[2565]: E0912 17:34:10.102605 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.104052 kubelet[2565]: W0912 17:34:10.102612 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.104052 kubelet[2565]: E0912 17:34:10.102619 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.104052 kubelet[2565]: E0912 17:34:10.102760 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.104052 kubelet[2565]: W0912 17:34:10.102767 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.104052 kubelet[2565]: E0912 17:34:10.102774 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.104052 kubelet[2565]: E0912 17:34:10.103000 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.104941 kubelet[2565]: W0912 17:34:10.103009 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.104941 kubelet[2565]: E0912 17:34:10.103017 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:10.104941 kubelet[2565]: I0912 17:34:10.103040 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6-kubelet-dir\") pod \"csi-node-driver-ktgfz\" (UID: \"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6\") " pod="calico-system/csi-node-driver-ktgfz" Sep 12 17:34:10.104941 kubelet[2565]: E0912 17:34:10.103262 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:10.104941 kubelet[2565]: W0912 17:34:10.103270 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:10.104941 kubelet[2565]: E0912 17:34:10.103278 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:34:10.104941 kubelet[2565]: I0912 17:34:10.103324 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6-socket-dir\") pod \"csi-node-driver-ktgfz\" (UID: \"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6\") " pod="calico-system/csi-node-driver-ktgfz"
Sep 12 17:34:10.104941 kubelet[2565]: E0912 17:34:10.103553 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:34:10.105240 kubelet[2565]: W0912 17:34:10.103561 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:34:10.105240 kubelet[2565]: E0912 17:34:10.103569 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:34:10.105240 kubelet[2565]: I0912 17:34:10.103582 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6-registration-dir\") pod \"csi-node-driver-ktgfz\" (UID: \"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6\") " pod="calico-system/csi-node-driver-ktgfz"
[the driver-call.go:262 / driver-call.go:149 / plugins.go:703 triplet above recurs verbatim, with fresh timestamps, 38 more times between 17:34:10.103 and 17:34:10.331; the repetitions are elided and only the distinct entries follow]
Sep 12 17:34:10.126158 containerd[1494]: time="2025-09-12T17:34:10.126068684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gvbw8,Uid:cb5ebed0-2df8-4514-8cc2-4ce4e9946666,Namespace:calico-system,Attempt:0,} returns sandbox id \"a423a2ba9360f6204732f6b86743b19886203cb5214fafcd12ae144f45465ec1\""
Sep 12 17:34:10.207879 kubelet[2565]: I0912 17:34:10.207555 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6-varrun\") pod \"csi-node-driver-ktgfz\" (UID: \"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6\") " pod="calico-system/csi-node-driver-ktgfz"
Sep 12 17:34:10.214319 kubelet[2565]: I0912 17:34:10.214068 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92rw5\" (UniqueName: \"kubernetes.io/projected/6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6-kube-api-access-92rw5\") pod \"csi-node-driver-ktgfz\" (UID: \"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6\") " pod="calico-system/csi-node-driver-ktgfz"
Sep 12 17:34:11.474380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699766027.mount: Deactivated successfully.
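The recurring three-line burst above is kubelet's FlexVolume prober at work: each time it rescans the volume-plugin directory it execs `<driver> init` and decodes the driver's stdout as a JSON status object. With /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds absent, the exec fails ("executable file not found in $PATH"), stdout stays empty, and decoding "" yields "unexpected end of JSON input". A minimal sketch of that probe sequence (illustrative Python, not kubelet's actual Go implementation in driver-call.go/plugins.go; only the driver path is taken from the log):

```python
#!/usr/bin/env python3
"""Sketch of the FlexVolume 'init' probe that generates the triplet above."""
import json
import subprocess

# Path copied from the log; kubelet derives it from its volume-plugin dir
# plus the vendor~driver subdirectory name (here: nodeagent~uds).
DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def probe_init(driver: str) -> dict:
    try:
        # kubelet runs "<driver> init" and expects JSON on stdout, e.g.
        # {"status": "Success", "capabilities": {"attach": false}}
        out = subprocess.run([driver, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # the "executable file not found in $PATH" case from the log
    # json.loads("") fails the same way Go's encoding/json reports
    # "unexpected end of JSON input" on empty input.
    return json.loads(out)

if __name__ == "__main__":
    try:
        print(probe_init(DRIVER))
    except json.JSONDecodeError as err:
        print(f"Failed to unmarshal output for command: init, error: {err}")
```

This also suggests why the spam is transient here: the flexvol-driver container started further down in this log runs Calico's pod2daemon-flexvol image, which is what populates that plugin directory on the host.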
Sep 12 17:34:11.866056 kubelet[2565]: E0912 17:34:11.865996 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktgfz" podUID="6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6"
Sep 12 17:34:12.014522 containerd[1494]: time="2025-09-12T17:34:12.014464903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:12.015697 containerd[1494]: time="2025-09-12T17:34:12.015327494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 12 17:34:12.016235 containerd[1494]: time="2025-09-12T17:34:12.016209789Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:12.019167 containerd[1494]: time="2025-09-12T17:34:12.018500405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:12.019167 containerd[1494]: time="2025-09-12T17:34:12.018995562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.192541226s"
Sep 12 17:34:12.019167 containerd[1494]: time="2025-09-12T17:34:12.019020387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 12 17:34:12.022183 containerd[1494]: time="2025-09-12T17:34:12.021173502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 17:34:12.040089 containerd[1494]: time="2025-09-12T17:34:12.039796171Z" level=info msg="CreateContainer within sandbox \"58a239a7df6974f816c380a7e77fec58000dbe7375d71acb99b62d7f8cae337a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 17:34:12.070216 containerd[1494]: time="2025-09-12T17:34:12.070165070Z" level=info msg="CreateContainer within sandbox \"58a239a7df6974f816c380a7e77fec58000dbe7375d71acb99b62d7f8cae337a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f51dca7dd6ad4c0c9c0d95529ae3c35d027d463b7ba38d6ee20b85233113c7e8\""
Sep 12 17:34:12.072052 containerd[1494]: time="2025-09-12T17:34:12.071188437Z" level=info msg="StartContainer for \"f51dca7dd6ad4c0c9c0d95529ae3c35d027d463b7ba38d6ee20b85233113c7e8\""
Sep 12 17:34:12.126277 systemd[1]: Started cri-containerd-f51dca7dd6ad4c0c9c0d95529ae3c35d027d463b7ba38d6ee20b85233113c7e8.scope - libcontainer container f51dca7dd6ad4c0c9c0d95529ae3c35d027d463b7ba38d6ee20b85233113c7e8.
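Two of the containerd entries above are enough for a back-of-the-envelope pull rate for the typha image: 35237389 bytes read and a reported pull duration of 2.192541226s. A quick check, with both values copied verbatim from the log:

```python
# Pull throughput for ghcr.io/flatcar/calico/typha:v3.30.3,
# using the byte count and duration reported by containerd above.
size_bytes = 35_237_389   # "bytes read" from the stop-pulling entry
elapsed_s = 2.192541226   # duration from the "Pulled image ..." entry
print(f"{size_bytes / elapsed_s / 2**20:.1f} MiB/s")  # -> 15.3 MiB/s
```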
Sep 12 17:34:12.173522 containerd[1494]: time="2025-09-12T17:34:12.173451123Z" level=info msg="StartContainer for \"f51dca7dd6ad4c0c9c0d95529ae3c35d027d463b7ba38d6ee20b85233113c7e8\" returns successfully"
Sep 12 17:34:13.024149 kubelet[2565]: E0912 17:34:13.024052 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:34:13.024149 kubelet[2565]: W0912 17:34:13.024074 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:34:13.025342 kubelet[2565]: E0912 17:34:13.024713 2565 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same FlexVolume triplet recurs 32 more times between 17:34:13.025 and 17:34:13.048; the repetitions are elided and only the distinct entries follow]
Sep 12 17:34:13.036793 kubelet[2565]: I0912 17:34:13.033946 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79666fcc9f-2ln26" podStartSLOduration=1.839998165 podStartE2EDuration="4.03393223s" podCreationTimestamp="2025-09-12 17:34:09 +0000 UTC" firstStartedPulling="2025-09-12 17:34:09.825941315 +0000 UTC m=+20.077460140" lastFinishedPulling="2025-09-12 17:34:12.019875381 +0000 UTC m=+22.271394205" observedRunningTime="2025-09-12 17:34:13.03381705 +0000 UTC m=+23.285335885" watchObservedRunningTime="2025-09-12 17:34:13.03393223 +0000 UTC m=+23.285451055"
Sep 12 17:34:13.702504 containerd[1494]: time="2025-09-12T17:34:13.702434583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:13.703907 containerd[1494]: time="2025-09-12T17:34:13.703748644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 12 17:34:13.705669 containerd[1494]: time="2025-09-12T17:34:13.704623135Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:13.707305 containerd[1494]: time="2025-09-12T17:34:13.706510618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:13.707305 containerd[1494]: time="2025-09-12T17:34:13.707191072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.68599135s"
Sep 12 17:34:13.707305 containerd[1494]: time="2025-09-12T17:34:13.707227423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 12 17:34:13.711321 containerd[1494]: time="2025-09-12T17:34:13.711266105Z" level=info msg="CreateContainer within sandbox \"a423a2ba9360f6204732f6b86743b19886203cb5214fafcd12ae144f45465ec1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 17:34:13.732461 containerd[1494]: time="2025-09-12T17:34:13.732409291Z" level=info msg="CreateContainer within sandbox \"a423a2ba9360f6204732f6b86743b19886203cb5214fafcd12ae144f45465ec1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"292ad1642f05e44600c16b438b5fe475ae98c3947a9b5b6294497f9758d18a1d\""
Sep 12 17:34:13.733207 containerd[1494]: time="2025-09-12T17:34:13.733154149Z" level=info msg="StartContainer for \"292ad1642f05e44600c16b438b5fe475ae98c3947a9b5b6294497f9758d18a1d\""
Sep 12 17:34:13.775602 systemd[1]: Started cri-containerd-292ad1642f05e44600c16b438b5fe475ae98c3947a9b5b6294497f9758d18a1d.scope - libcontainer container 292ad1642f05e44600c16b438b5fe475ae98c3947a9b5b6294497f9758d18a1d.
Sep 12 17:34:13.814142 containerd[1494]: time="2025-09-12T17:34:13.814068568Z" level=info msg="StartContainer for \"292ad1642f05e44600c16b438b5fe475ae98c3947a9b5b6294497f9758d18a1d\" returns successfully"
Sep 12 17:34:13.823952 systemd[1]: cri-containerd-292ad1642f05e44600c16b438b5fe475ae98c3947a9b5b6294497f9758d18a1d.scope: Deactivated successfully.
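The pod_startup_latency_tracker entry kept above reports podStartSLOduration=1.839998165 alongside podStartE2EDuration="4.03393223s": the SLO figure is the end-to-end startup time with the image-pull window subtracted. Re-deriving it from the timestamps in that same entry (expressed as seconds past 17:34:00; the last-digit discrepancy is display rounding):

```python
# podStartSLOduration = (watchObservedRunningTime - podCreationTimestamp)
#                     - (lastFinishedPulling - firstStartedPulling),
# all values copied from the pod_startup_latency_tracker.go entry above.
created          = 9.000000000   # podCreationTimestamp  17:34:09
first_pull       = 9.825941315   # firstStartedPulling
last_pull        = 12.019875381  # lastFinishedPulling
observed_running = 13.033932230  # watchObservedRunningTime

e2e = observed_running - created   # 4.03393223  (podStartE2EDuration)
pull = last_pull - first_pull      # 2.193934066 (time spent pulling images)
print(e2e, e2e - pull)             # SLO duration ~= 1.839998165
```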
Sep 12 17:34:13.865160 kubelet[2565]: E0912 17:34:13.865098 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktgfz" podUID="6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6"
Sep 12 17:34:13.882736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-292ad1642f05e44600c16b438b5fe475ae98c3947a9b5b6294497f9758d18a1d-rootfs.mount: Deactivated successfully.
Sep 12 17:34:13.926221 containerd[1494]: time="2025-09-12T17:34:13.905046364Z" level=info msg="shim disconnected" id=292ad1642f05e44600c16b438b5fe475ae98c3947a9b5b6294497f9758d18a1d namespace=k8s.io
Sep 12 17:34:13.926221 containerd[1494]: time="2025-09-12T17:34:13.926210216Z" level=warning msg="cleaning up after shim disconnected" id=292ad1642f05e44600c16b438b5fe475ae98c3947a9b5b6294497f9758d18a1d namespace=k8s.io
Sep 12 17:34:13.926221 containerd[1494]: time="2025-09-12T17:34:13.926225570Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:34:14.021819 kubelet[2565]: I0912 17:34:14.020781 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:34:14.024217 containerd[1494]: time="2025-09-12T17:34:14.024164070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 17:34:15.864152 kubelet[2565]: E0912 17:34:15.864084 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktgfz" podUID="6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6"
Sep 12 17:34:17.696770 containerd[1494]: time="2025-09-12T17:34:17.696727927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:17.698154 containerd[1494]: time="2025-09-12T17:34:17.697913840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 12 17:34:17.698699 containerd[1494]: time="2025-09-12T17:34:17.698661164Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:17.701625 containerd[1494]: time="2025-09-12T17:34:17.701598938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:17.702578 containerd[1494]: time="2025-09-12T17:34:17.702435558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.678200148s"
Sep 12 17:34:17.702578 containerd[1494]: time="2025-09-12T17:34:17.702460813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 12 17:34:17.705877 containerd[1494]: time="2025-09-12T17:34:17.705839812Z" level=info msg="CreateContainer within sandbox \"a423a2ba9360f6204732f6b86743b19886203cb5214fafcd12ae144f45465ec1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 17:34:17.738579 containerd[1494]: time="2025-09-12T17:34:17.738496033Z" level=info msg="CreateContainer within sandbox \"a423a2ba9360f6204732f6b86743b19886203cb5214fafcd12ae144f45465ec1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e5660158d1cb0f4cd0296aae9702a13191d8d69dafdfa85ad3d82253acf39b39\""
Sep 12 17:34:17.739429 containerd[1494]: time="2025-09-12T17:34:17.739408609Z" level=info msg="StartContainer for \"e5660158d1cb0f4cd0296aae9702a13191d8d69dafdfa85ad3d82253acf39b39\""
Sep 12 17:34:17.774424 systemd[1]: Started cri-containerd-e5660158d1cb0f4cd0296aae9702a13191d8d69dafdfa85ad3d82253acf39b39.scope - libcontainer container e5660158d1cb0f4cd0296aae9702a13191d8d69dafdfa85ad3d82253acf39b39.
Sep 12 17:34:17.807088 containerd[1494]: time="2025-09-12T17:34:17.807021029Z" level=info msg="StartContainer for \"e5660158d1cb0f4cd0296aae9702a13191d8d69dafdfa85ad3d82253acf39b39\" returns successfully"
Sep 12 17:34:17.865251 kubelet[2565]: E0912 17:34:17.865194 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktgfz" podUID="6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6"
Sep 12 17:34:18.236137 systemd[1]: cri-containerd-e5660158d1cb0f4cd0296aae9702a13191d8d69dafdfa85ad3d82253acf39b39.scope: Deactivated successfully.
Sep 12 17:34:18.274842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5660158d1cb0f4cd0296aae9702a13191d8d69dafdfa85ad3d82253acf39b39-rootfs.mount: Deactivated successfully.
Sep 12 17:34:18.296086 containerd[1494]: time="2025-09-12T17:34:18.296008933Z" level=info msg="shim disconnected" id=e5660158d1cb0f4cd0296aae9702a13191d8d69dafdfa85ad3d82253acf39b39 namespace=k8s.io
Sep 12 17:34:18.296086 containerd[1494]: time="2025-09-12T17:34:18.296066879Z" level=warning msg="cleaning up after shim disconnected" id=e5660158d1cb0f4cd0296aae9702a13191d8d69dafdfa85ad3d82253acf39b39 namespace=k8s.io
Sep 12 17:34:18.296086 containerd[1494]: time="2025-09-12T17:34:18.296075417Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:34:18.325855 kubelet[2565]: I0912 17:34:18.325542 2565 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 17:34:18.423893 systemd[1]: Created slice kubepods-besteffort-pod5d41cc7d_de46_4428_bd36_c4653b827654.slice - libcontainer container kubepods-besteffort-pod5d41cc7d_de46_4428_bd36_c4653b827654.slice.
Sep 12 17:34:18.434772 systemd[1]: Created slice kubepods-burstable-podddfd8960_9299_46a3_9189_694d4fce081a.slice - libcontainer container kubepods-burstable-podddfd8960_9299_46a3_9189_694d4fce081a.slice.
Sep 12 17:34:18.443172 systemd[1]: Created slice kubepods-besteffort-poda684160d_e91c_401a_8d80_a0113d1dea0d.slice - libcontainer container kubepods-besteffort-poda684160d_e91c_401a_8d80_a0113d1dea0d.slice.
Sep 12 17:34:18.450015 systemd[1]: Created slice kubepods-burstable-podfe8c226f_3398_4b35_97ba_3a5b1ba242b3.slice - libcontainer container kubepods-burstable-podfe8c226f_3398_4b35_97ba_3a5b1ba242b3.slice.
Sep 12 17:34:18.457764 systemd[1]: Created slice kubepods-besteffort-pod3081acf9_a29d_4b4d_97c7_a33056bd9370.slice - libcontainer container kubepods-besteffort-pod3081acf9_a29d_4b4d_97c7_a33056bd9370.slice.
Sep 12 17:34:18.465379 systemd[1]: Created slice kubepods-besteffort-pod740d955b_2103_436a_891e_3472e5de0fd4.slice - libcontainer container kubepods-besteffort-pod740d955b_2103_436a_891e_3472e5de0fd4.slice. Sep 12 17:34:18.471739 systemd[1]: Created slice kubepods-besteffort-podd9173bd3_d47c_4dd9_b166_5150b07180f7.slice - libcontainer container kubepods-besteffort-podd9173bd3_d47c_4dd9_b166_5150b07180f7.slice. Sep 12 17:34:18.484164 kubelet[2565]: I0912 17:34:18.484101 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxdz\" (UniqueName: \"kubernetes.io/projected/5d41cc7d-de46-4428-bd36-c4653b827654-kube-api-access-8hxdz\") pod \"calico-kube-controllers-594f5c66f9-sz9rn\" (UID: \"5d41cc7d-de46-4428-bd36-c4653b827654\") " pod="calico-system/calico-kube-controllers-594f5c66f9-sz9rn" Sep 12 17:34:18.484164 kubelet[2565]: I0912 17:34:18.484157 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5zb6\" (UniqueName: \"kubernetes.io/projected/740d955b-2103-436a-891e-3472e5de0fd4-kube-api-access-s5zb6\") pod \"calico-apiserver-5598f4bff7-rqkr4\" (UID: \"740d955b-2103-436a-891e-3472e5de0fd4\") " pod="calico-apiserver/calico-apiserver-5598f4bff7-rqkr4" Sep 12 17:34:18.484164 kubelet[2565]: I0912 17:34:18.484176 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3081acf9-a29d-4b4d-97c7-a33056bd9370-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-5bksp\" (UID: \"3081acf9-a29d-4b4d-97c7-a33056bd9370\") " pod="calico-system/goldmane-54d579b49d-5bksp" Sep 12 17:34:18.484427 kubelet[2565]: I0912 17:34:18.484192 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9173bd3-d47c-4dd9-b166-5150b07180f7-calico-apiserver-certs\") pod \"calico-apiserver-5598f4bff7-6jjd6\" (UID: \"d9173bd3-d47c-4dd9-b166-5150b07180f7\") " pod="calico-apiserver/calico-apiserver-5598f4bff7-6jjd6" Sep 12 17:34:18.484427 kubelet[2565]: I0912 17:34:18.484212 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l78j\" (UniqueName: \"kubernetes.io/projected/fe8c226f-3398-4b35-97ba-3a5b1ba242b3-kube-api-access-9l78j\") pod \"coredns-674b8bbfcf-524kk\" (UID: \"fe8c226f-3398-4b35-97ba-3a5b1ba242b3\") " pod="kube-system/coredns-674b8bbfcf-524kk" Sep 12 17:34:18.484427 kubelet[2565]: I0912 17:34:18.484224 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a684160d-e91c-401a-8d80-a0113d1dea0d-whisker-backend-key-pair\") pod \"whisker-5d989cb468-c9j2k\" (UID: \"a684160d-e91c-401a-8d80-a0113d1dea0d\") " pod="calico-system/whisker-5d989cb468-c9j2k" Sep 12 17:34:18.484427 kubelet[2565]: I0912 17:34:18.484239 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwt7c\" (UniqueName: \"kubernetes.io/projected/a684160d-e91c-401a-8d80-a0113d1dea0d-kube-api-access-kwt7c\") pod \"whisker-5d989cb468-c9j2k\" (UID: \"a684160d-e91c-401a-8d80-a0113d1dea0d\") " pod="calico-system/whisker-5d989cb468-c9j2k" Sep 12 17:34:18.484427 kubelet[2565]: I0912 17:34:18.484253 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rfhgr\" (UniqueName: \"kubernetes.io/projected/3081acf9-a29d-4b4d-97c7-a33056bd9370-kube-api-access-rfhgr\") pod \"goldmane-54d579b49d-5bksp\" (UID: \"3081acf9-a29d-4b4d-97c7-a33056bd9370\") " pod="calico-system/goldmane-54d579b49d-5bksp" Sep 12 17:34:18.484598 kubelet[2565]: I0912 17:34:18.484266 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cb59\" (UniqueName: \"kubernetes.io/projected/ddfd8960-9299-46a3-9189-694d4fce081a-kube-api-access-8cb59\") pod \"coredns-674b8bbfcf-2drmz\" (UID: \"ddfd8960-9299-46a3-9189-694d4fce081a\") " pod="kube-system/coredns-674b8bbfcf-2drmz" Sep 12 17:34:18.484598 kubelet[2565]: I0912 17:34:18.484282 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a684160d-e91c-401a-8d80-a0113d1dea0d-whisker-ca-bundle\") pod \"whisker-5d989cb468-c9j2k\" (UID: \"a684160d-e91c-401a-8d80-a0113d1dea0d\") " pod="calico-system/whisker-5d989cb468-c9j2k" Sep 12 17:34:18.484598 kubelet[2565]: I0912 17:34:18.484295 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glch\" (UniqueName: \"kubernetes.io/projected/d9173bd3-d47c-4dd9-b166-5150b07180f7-kube-api-access-2glch\") pod \"calico-apiserver-5598f4bff7-6jjd6\" (UID: \"d9173bd3-d47c-4dd9-b166-5150b07180f7\") " pod="calico-apiserver/calico-apiserver-5598f4bff7-6jjd6" Sep 12 17:34:18.484598 kubelet[2565]: I0912 17:34:18.484309 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddfd8960-9299-46a3-9189-694d4fce081a-config-volume\") pod \"coredns-674b8bbfcf-2drmz\" (UID: \"ddfd8960-9299-46a3-9189-694d4fce081a\") " pod="kube-system/coredns-674b8bbfcf-2drmz" Sep 12 17:34:18.484598 kubelet[2565]: I0912 17:34:18.484333 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d41cc7d-de46-4428-bd36-c4653b827654-tigera-ca-bundle\") pod \"calico-kube-controllers-594f5c66f9-sz9rn\" (UID: \"5d41cc7d-de46-4428-bd36-c4653b827654\") " pod="calico-system/calico-kube-controllers-594f5c66f9-sz9rn" Sep 12 17:34:18.485515 kubelet[2565]: I0912 17:34:18.484350 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3081acf9-a29d-4b4d-97c7-a33056bd9370-goldmane-key-pair\") pod \"goldmane-54d579b49d-5bksp\" (UID: \"3081acf9-a29d-4b4d-97c7-a33056bd9370\") " pod="calico-system/goldmane-54d579b49d-5bksp" Sep 12 17:34:18.485515 kubelet[2565]: I0912 17:34:18.484369 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe8c226f-3398-4b35-97ba-3a5b1ba242b3-config-volume\") pod \"coredns-674b8bbfcf-524kk\" (UID: \"fe8c226f-3398-4b35-97ba-3a5b1ba242b3\") " pod="kube-system/coredns-674b8bbfcf-524kk" Sep 12 17:34:18.485515 kubelet[2565]: I0912 17:34:18.484384 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/740d955b-2103-436a-891e-3472e5de0fd4-calico-apiserver-certs\") pod \"calico-apiserver-5598f4bff7-rqkr4\" (UID: \"740d955b-2103-436a-891e-3472e5de0fd4\") " 
pod="calico-apiserver/calico-apiserver-5598f4bff7-rqkr4" Sep 12 17:34:18.485515 kubelet[2565]: I0912 17:34:18.484396 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3081acf9-a29d-4b4d-97c7-a33056bd9370-config\") pod \"goldmane-54d579b49d-5bksp\" (UID: \"3081acf9-a29d-4b4d-97c7-a33056bd9370\") " pod="calico-system/goldmane-54d579b49d-5bksp" Sep 12 17:34:18.740162 containerd[1494]: time="2025-09-12T17:34:18.738431009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2drmz,Uid:ddfd8960-9299-46a3-9189-694d4fce081a,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:18.749826 containerd[1494]: time="2025-09-12T17:34:18.749783904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d989cb468-c9j2k,Uid:a684160d-e91c-401a-8d80-a0113d1dea0d,Namespace:calico-system,Attempt:0,}" Sep 12 17:34:18.755366 containerd[1494]: time="2025-09-12T17:34:18.755319610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-594f5c66f9-sz9rn,Uid:5d41cc7d-de46-4428-bd36-c4653b827654,Namespace:calico-system,Attempt:0,}" Sep 12 17:34:18.756040 containerd[1494]: time="2025-09-12T17:34:18.755848189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-524kk,Uid:fe8c226f-3398-4b35-97ba-3a5b1ba242b3,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:18.762849 containerd[1494]: time="2025-09-12T17:34:18.761987616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5bksp,Uid:3081acf9-a29d-4b4d-97c7-a33056bd9370,Namespace:calico-system,Attempt:0,}" Sep 12 17:34:18.769650 containerd[1494]: time="2025-09-12T17:34:18.769624980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5598f4bff7-rqkr4,Uid:740d955b-2103-436a-891e-3472e5de0fd4,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:34:18.776063 containerd[1494]: time="2025-09-12T17:34:18.776019562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5598f4bff7-6jjd6,Uid:d9173bd3-d47c-4dd9-b166-5150b07180f7,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:34:19.048372 containerd[1494]: time="2025-09-12T17:34:19.048236446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:34:19.140301 containerd[1494]: time="2025-09-12T17:34:19.140150165Z" level=error msg="Failed to destroy network for sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.144779 containerd[1494]: time="2025-09-12T17:34:19.144740728Z" level=error msg="encountered an error cleaning up failed sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.145209 containerd[1494]: time="2025-09-12T17:34:19.145185217Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5598f4bff7-rqkr4,Uid:740d955b-2103-436a-891e-3472e5de0fd4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.145860 containerd[1494]: time="2025-09-12T17:34:19.145840944Z" level=error msg="Failed to destroy network for sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.146524 containerd[1494]: time="2025-09-12T17:34:19.146203776Z" level=error msg="encountered an error cleaning up failed sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.146926 containerd[1494]: time="2025-09-12T17:34:19.146614534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5598f4bff7-6jjd6,Uid:d9173bd3-d47c-4dd9-b166-5150b07180f7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.147306 kubelet[2565]: E0912 17:34:19.147194 2565 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.147306 kubelet[2565]: E0912 17:34:19.147271 2565 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5598f4bff7-rqkr4" Sep 12 17:34:19.147306 kubelet[2565]: E0912 17:34:19.147293 2565 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5598f4bff7-rqkr4" Sep 12 17:34:19.147837 kubelet[2565]: E0912 17:34:19.147344 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5598f4bff7-rqkr4_calico-apiserver(740d955b-2103-436a-891e-3472e5de0fd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5598f4bff7-rqkr4_calico-apiserver(740d955b-2103-436a-891e-3472e5de0fd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5598f4bff7-rqkr4" podUID="740d955b-2103-436a-891e-3472e5de0fd4" Sep 12 17:34:19.147837 kubelet[2565]: E0912 17:34:19.147459 2565 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.147837 kubelet[2565]: E0912 17:34:19.147483 2565 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5598f4bff7-6jjd6" Sep 12 17:34:19.147970 kubelet[2565]: E0912 17:34:19.147506 2565 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5598f4bff7-6jjd6" Sep 12 17:34:19.147970 kubelet[2565]: E0912 17:34:19.147534 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5598f4bff7-6jjd6_calico-apiserver(d9173bd3-d47c-4dd9-b166-5150b07180f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5598f4bff7-6jjd6_calico-apiserver(d9173bd3-d47c-4dd9-b166-5150b07180f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5598f4bff7-6jjd6" podUID="d9173bd3-d47c-4dd9-b166-5150b07180f7" Sep 12 17:34:19.148690 containerd[1494]: time="2025-09-12T17:34:19.148647373Z" level=error msg="Failed to destroy network for sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.149030 containerd[1494]: time="2025-09-12T17:34:19.149003482Z" level=error msg="encountered an error cleaning up failed sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.149393 containerd[1494]: time="2025-09-12T17:34:19.149314895Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5d989cb468-c9j2k,Uid:a684160d-e91c-401a-8d80-a0113d1dea0d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.150189 kubelet[2565]: E0912 17:34:19.150107 2565 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.150234 kubelet[2565]: E0912 17:34:19.150199 2565 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d989cb468-c9j2k" Sep 12 17:34:19.150234 kubelet[2565]: E0912 17:34:19.150216 2565 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d989cb468-c9j2k" Sep 12 17:34:19.150294 kubelet[2565]: E0912 17:34:19.150242 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d989cb468-c9j2k_calico-system(a684160d-e91c-401a-8d80-a0113d1dea0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d989cb468-c9j2k_calico-system(a684160d-e91c-401a-8d80-a0113d1dea0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d989cb468-c9j2k" podUID="a684160d-e91c-401a-8d80-a0113d1dea0d" Sep 12 17:34:19.156836 containerd[1494]: time="2025-09-12T17:34:19.156215065Z" level=error msg="Failed to destroy network for sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.156836 containerd[1494]: time="2025-09-12T17:34:19.156599434Z" level=error msg="encountered an error cleaning up failed sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.156836 containerd[1494]: 
time="2025-09-12T17:34:19.156733754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2drmz,Uid:ddfd8960-9299-46a3-9189-694d4fce081a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.157381 kubelet[2565]: E0912 17:34:19.156972 2565 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.157381 kubelet[2565]: E0912 17:34:19.157012 2565 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2drmz" Sep 12 17:34:19.157381 kubelet[2565]: E0912 17:34:19.157029 2565 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2drmz" Sep 12 17:34:19.157463 kubelet[2565]: E0912 17:34:19.157075 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2drmz_kube-system(ddfd8960-9299-46a3-9189-694d4fce081a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2drmz_kube-system(ddfd8960-9299-46a3-9189-694d4fce081a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2drmz" podUID="ddfd8960-9299-46a3-9189-694d4fce081a" Sep 12 17:34:19.161289 containerd[1494]: time="2025-09-12T17:34:19.161237249Z" level=error msg="Failed to destroy network for sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.161840 containerd[1494]: time="2025-09-12T17:34:19.161820488Z" level=error msg="encountered an error cleaning up failed sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 17:34:19.161990 containerd[1494]: time="2025-09-12T17:34:19.161944997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5bksp,Uid:3081acf9-a29d-4b4d-97c7-a33056bd9370,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.162805 kubelet[2565]: E0912 17:34:19.162774 2565 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.162867 kubelet[2565]: E0912 17:34:19.162816 2565 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5bksp" Sep 12 17:34:19.163269 kubelet[2565]: E0912 17:34:19.163202 2565 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5bksp" Sep 12 17:34:19.163318 kubelet[2565]: E0912 17:34:19.163285 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-5bksp_calico-system(3081acf9-a29d-4b4d-97c7-a33056bd9370)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-5bksp_calico-system(3081acf9-a29d-4b4d-97c7-a33056bd9370)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-5bksp" podUID="3081acf9-a29d-4b4d-97c7-a33056bd9370" Sep 12 17:34:19.163581 containerd[1494]: time="2025-09-12T17:34:19.163527143Z" level=error msg="Failed to destroy network for sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.165449 containerd[1494]: time="2025-09-12T17:34:19.165399196Z" level=error msg="encountered an error cleaning up failed sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.165516 containerd[1494]: time="2025-09-12T17:34:19.165461579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-594f5c66f9-sz9rn,Uid:5d41cc7d-de46-4428-bd36-c4653b827654,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.166146 kubelet[2565]: E0912 17:34:19.165599 2565 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.166146 kubelet[2565]: E0912 17:34:19.165631 2565 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-594f5c66f9-sz9rn" Sep 12 17:34:19.166146 kubelet[2565]: E0912 17:34:19.165648 2565 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-594f5c66f9-sz9rn" Sep 12 17:34:19.166314 containerd[1494]: time="2025-09-12T17:34:19.165740713Z" level=error msg="Failed to destroy network for sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.166314 containerd[1494]: time="2025-09-12T17:34:19.165941676Z" level=error msg="encountered an error cleaning up failed sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.166314 containerd[1494]: time="2025-09-12T17:34:19.165971371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-524kk,Uid:fe8c226f-3398-4b35-97ba-3a5b1ba242b3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.166406 kubelet[2565]: E0912 17:34:19.165678 2565 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-594f5c66f9-sz9rn_calico-system(5d41cc7d-de46-4428-bd36-c4653b827654)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-594f5c66f9-sz9rn_calico-system(5d41cc7d-de46-4428-bd36-c4653b827654)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-594f5c66f9-sz9rn" podUID="5d41cc7d-de46-4428-bd36-c4653b827654" Sep 12 17:34:19.166406 kubelet[2565]: E0912 17:34:19.166080 2565 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.166406 kubelet[2565]: E0912 17:34:19.166101 2565 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-524kk" Sep 12 17:34:19.166550 kubelet[2565]: E0912 17:34:19.166179 2565 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-524kk" Sep 12 17:34:19.166550 kubelet[2565]: E0912 17:34:19.166213 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-524kk_kube-system(fe8c226f-3398-4b35-97ba-3a5b1ba242b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-524kk_kube-system(fe8c226f-3398-4b35-97ba-3a5b1ba242b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-524kk" podUID="fe8c226f-3398-4b35-97ba-3a5b1ba242b3" Sep 12 17:34:19.434672 kubelet[2565]: I0912 17:34:19.434250 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:34:19.736225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9-shm.mount: Deactivated successfully. Sep 12 17:34:19.736328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8-shm.mount: Deactivated successfully. 
Sep 12 17:34:19.736395 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3-shm.mount: Deactivated successfully. Sep 12 17:34:19.871107 systemd[1]: Created slice kubepods-besteffort-pod6d64cafd_8d22_4ebe_ad67_ac3e14daa0d6.slice - libcontainer container kubepods-besteffort-pod6d64cafd_8d22_4ebe_ad67_ac3e14daa0d6.slice. Sep 12 17:34:19.874046 containerd[1494]: time="2025-09-12T17:34:19.874007964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktgfz,Uid:6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6,Namespace:calico-system,Attempt:0,}" Sep 12 17:34:19.934036 containerd[1494]: time="2025-09-12T17:34:19.933975222Z" level=error msg="Failed to destroy network for sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.935860 containerd[1494]: time="2025-09-12T17:34:19.935826700Z" level=error msg="encountered an error cleaning up failed sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.935932 containerd[1494]: time="2025-09-12T17:34:19.935879804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktgfz,Uid:6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.937351 kubelet[2565]: E0912 17:34:19.937177 2565 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:19.937351 kubelet[2565]: E0912 17:34:19.937234 2565 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ktgfz" Sep 12 17:34:19.937351 kubelet[2565]: E0912 17:34:19.937253 2565 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ktgfz" Sep 12 17:34:19.937466 kubelet[2565]: E0912 17:34:19.937297 2565 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ktgfz_calico-system(6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ktgfz_calico-system(6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ktgfz" podUID="6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6" Sep 12 17:34:19.938110 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6-shm.mount: Deactivated successfully. Sep 12 17:34:20.048384 kubelet[2565]: I0912 17:34:20.048273 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Sep 12 17:34:20.051239 kubelet[2565]: I0912 17:34:20.051200 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Sep 12 17:34:20.058283 kubelet[2565]: I0912 17:34:20.057853 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:20.060472 kubelet[2565]: I0912 17:34:20.060440 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:20.062942 kubelet[2565]: I0912 17:34:20.062911 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Sep 12 17:34:20.067821 containerd[1494]: time="2025-09-12T17:34:20.067530631Z" level=info msg="StopPodSandbox for \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\"" Sep 12 17:34:20.069984 containerd[1494]: time="2025-09-12T17:34:20.068777518Z" level=info msg="StopPodSandbox for \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\"" Sep 12 17:34:20.070036 containerd[1494]: time="2025-09-12T17:34:20.069990091Z" level=info msg="Ensure that sandbox 5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3 in task-service has been cleanup successfully" Sep 12 17:34:20.070099 containerd[1494]: time="2025-09-12T17:34:20.070074232Z" level=info msg="Ensure that sandbox 94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10 in task-service has been cleanup successfully" Sep 12 17:34:20.071812 containerd[1494]: time="2025-09-12T17:34:20.071793493Z" level=info msg="StopPodSandbox for \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\"" Sep 12 17:34:20.072518 containerd[1494]: time="2025-09-12T17:34:20.072487724Z" level=info msg="Ensure that sandbox 0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6 in task-service has been cleanup successfully" Sep 12 17:34:20.073824 containerd[1494]: time="2025-09-12T17:34:20.073765697Z" level=info msg="StopPodSandbox for \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\"" Sep 12 17:34:20.073993 containerd[1494]: time="2025-09-12T17:34:20.073958461Z" level=info msg="Ensure that sandbox 45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5 in task-service 
has been cleanup successfully" Sep 12 17:34:20.075723 containerd[1494]: time="2025-09-12T17:34:20.071935488Z" level=info msg="StopPodSandbox for \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\"" Sep 12 17:34:20.075723 containerd[1494]: time="2025-09-12T17:34:20.075527068Z" level=info msg="Ensure that sandbox a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498 in task-service has been cleanup successfully" Sep 12 17:34:20.087933 kubelet[2565]: I0912 17:34:20.087347 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Sep 12 17:34:20.088254 containerd[1494]: time="2025-09-12T17:34:20.088218876Z" level=info msg="StopPodSandbox for \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\"" Sep 12 17:34:20.088460 containerd[1494]: time="2025-09-12T17:34:20.088447105Z" level=info msg="Ensure that sandbox 25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49 in task-service has been cleanup successfully" Sep 12 17:34:20.097939 kubelet[2565]: I0912 17:34:20.097913 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Sep 12 17:34:20.098798 containerd[1494]: time="2025-09-12T17:34:20.098775188Z" level=info msg="StopPodSandbox for \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\"" Sep 12 17:34:20.099035 containerd[1494]: time="2025-09-12T17:34:20.099022029Z" level=info msg="Ensure that sandbox 0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9 in task-service has been cleanup successfully" Sep 12 17:34:20.103932 kubelet[2565]: I0912 17:34:20.103894 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:20.106490 containerd[1494]: time="2025-09-12T17:34:20.105361428Z" level=info msg="StopPodSandbox for \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\"" Sep 12 17:34:20.113542 containerd[1494]: time="2025-09-12T17:34:20.113494328Z" level=info msg="Ensure that sandbox dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8 in task-service has been cleanup successfully" Sep 12 17:34:20.152228 containerd[1494]: time="2025-09-12T17:34:20.152172444Z" level=error msg="StopPodSandbox for \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\" failed" error="failed to destroy network for sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:20.153279 kubelet[2565]: E0912 17:34:20.152929 2565 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Sep 12 17:34:20.160093 containerd[1494]: time="2025-09-12T17:34:20.159252785Z" level=error msg="StopPodSandbox for \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\" failed" error="failed to destroy 
network for sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:20.162329 kubelet[2565]: E0912 17:34:20.161468 2565 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Sep 12 17:34:20.162495 kubelet[2565]: E0912 17:34:20.161518 2565 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498"} Sep 12 17:34:20.162653 kubelet[2565]: E0912 17:34:20.162615 2565 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9173bd3-d47c-4dd9-b166-5150b07180f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:34:20.162857 kubelet[2565]: E0912 17:34:20.162822 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9173bd3-d47c-4dd9-b166-5150b07180f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5598f4bff7-6jjd6" podUID="d9173bd3-d47c-4dd9-b166-5150b07180f7" Sep 12 17:34:20.162997 kubelet[2565]: E0912 17:34:20.153209 2565 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6"} Sep 12 17:34:20.163150 kubelet[2565]: E0912 17:34:20.163136 2565 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:34:20.163254 kubelet[2565]: E0912 17:34:20.163240 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ktgfz" podUID="6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6" Sep 12 17:34:20.174268 containerd[1494]: time="2025-09-12T17:34:20.174216570Z" level=error msg="StopPodSandbox for \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\" failed" error="failed to destroy network for sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:20.174606 kubelet[2565]: E0912 17:34:20.174570 2565 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:20.174728 kubelet[2565]: E0912 17:34:20.174713 2565 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8"} Sep 12 17:34:20.174825 kubelet[2565]: E0912 17:34:20.174811 2565 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a684160d-e91c-401a-8d80-a0113d1dea0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:34:20.174942 kubelet[2565]: E0912 17:34:20.174926 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a684160d-e91c-401a-8d80-a0113d1dea0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d989cb468-c9j2k" podUID="a684160d-e91c-401a-8d80-a0113d1dea0d" Sep 12 17:34:20.191102 containerd[1494]: time="2025-09-12T17:34:20.191055341Z" level=error msg="StopPodSandbox for \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\" failed" error="failed to destroy network for sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:20.191548 kubelet[2565]: E0912 17:34:20.191426 2565 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:20.191548 kubelet[2565]: E0912 17:34:20.191473 2565 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10"} Sep 12 17:34:20.191548 kubelet[2565]: E0912 17:34:20.191502 2565 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe8c226f-3398-4b35-97ba-3a5b1ba242b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:34:20.191548 kubelet[2565]: E0912 17:34:20.191524 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe8c226f-3398-4b35-97ba-3a5b1ba242b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-524kk" podUID="fe8c226f-3398-4b35-97ba-3a5b1ba242b3" Sep 12 17:34:20.199337 containerd[1494]: time="2025-09-12T17:34:20.198950119Z" level=error msg="StopPodSandbox for \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\" failed" error="failed to destroy network for sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:20.199912 kubelet[2565]: E0912 17:34:20.199235 2565 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:20.199912 kubelet[2565]: E0912 17:34:20.199267 2565 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5"} Sep 12 17:34:20.199912 kubelet[2565]: E0912 17:34:20.199290 2565 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"740d955b-2103-436a-891e-3472e5de0fd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:34:20.199912 kubelet[2565]: E0912 17:34:20.199308 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"740d955b-2103-436a-891e-3472e5de0fd4\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5598f4bff7-rqkr4" podUID="740d955b-2103-436a-891e-3472e5de0fd4" Sep 12 17:34:20.200069 containerd[1494]: time="2025-09-12T17:34:20.199571974Z" level=error msg="StopPodSandbox for \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\" failed" error="failed to destroy network for sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:20.200094 kubelet[2565]: E0912 17:34:20.199773 2565 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Sep 12 17:34:20.200094 kubelet[2565]: E0912 17:34:20.199819 2565 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3"} Sep 12 17:34:20.200094 kubelet[2565]: E0912 17:34:20.199839 2565 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ddfd8960-9299-46a3-9189-694d4fce081a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:34:20.200094 kubelet[2565]: E0912 17:34:20.199855 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ddfd8960-9299-46a3-9189-694d4fce081a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2drmz" podUID="ddfd8960-9299-46a3-9189-694d4fce081a" Sep 12 17:34:20.202783 containerd[1494]: time="2025-09-12T17:34:20.202729363Z" level=error msg="StopPodSandbox for \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\" failed" error="failed to destroy network for sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:20.203239 kubelet[2565]: E0912 17:34:20.203191 2565 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Sep 12 17:34:20.203296 kubelet[2565]: E0912 17:34:20.203250 2565 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9"} Sep 12 17:34:20.203321 kubelet[2565]: E0912 17:34:20.203285 2565 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d41cc7d-de46-4428-bd36-c4653b827654\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:34:20.203371 kubelet[2565]: E0912 17:34:20.203317 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d41cc7d-de46-4428-bd36-c4653b827654\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-594f5c66f9-sz9rn" podUID="5d41cc7d-de46-4428-bd36-c4653b827654" Sep 12 17:34:20.205029 containerd[1494]: time="2025-09-12T17:34:20.204954530Z" level=error msg="StopPodSandbox for \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\" failed" error="failed to destroy network for sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:34:20.205218 kubelet[2565]: E0912 17:34:20.205173 2565 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Sep 12 17:34:20.205264 kubelet[2565]: E0912 17:34:20.205216 2565 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49"} Sep 12 17:34:20.205264 kubelet[2565]: E0912 17:34:20.205241 2565 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3081acf9-a29d-4b4d-97c7-a33056bd9370\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Sep 12 17:34:20.205354 kubelet[2565]: E0912 17:34:20.205261 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3081acf9-a29d-4b4d-97c7-a33056bd9370\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-5bksp" podUID="3081acf9-a29d-4b4d-97c7-a33056bd9370" Sep 12 17:34:23.499585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575933614.mount: Deactivated successfully. Sep 12 17:34:23.590350 containerd[1494]: time="2025-09-12T17:34:23.590278903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:34:23.600295 containerd[1494]: time="2025-09-12T17:34:23.600105095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.544432568s" Sep 12 17:34:23.600295 containerd[1494]: time="2025-09-12T17:34:23.600185866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:34:23.604309 containerd[1494]: time="2025-09-12T17:34:23.604268447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:23.654065 containerd[1494]: time="2025-09-12T17:34:23.654026363Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:23.655105 containerd[1494]: time="2025-09-12T17:34:23.654837248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:23.656685 containerd[1494]: time="2025-09-12T17:34:23.656650521Z" level=info msg="CreateContainer within sandbox \"a423a2ba9360f6204732f6b86743b19886203cb5214fafcd12ae144f45465ec1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:34:23.713356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752880397.mount: Deactivated successfully. 
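Every KillPodSandbox failure above shares one root cause: the Calico CNI plugin's delete path stats /var/lib/calico/nodename, a file the calico/node container writes after it starts, and at 17:34:20 that container was still being pulled. The roughly 157 MB node image only finishes pulling at 17:34:23, so until then each sandbox teardown fails and the kubelet requeues the affected pods. A minimal Python sketch of the readiness check the error message suggests, polling the conventional host path (a hypothetical helper, not Calico's own code):

    import time
    from pathlib import Path

    NODENAME = Path("/var/lib/calico/nodename")  # written by calico/node once it is up

    def wait_for_calico_node(timeout: float = 60.0, interval: float = 2.0) -> str:
        """Poll for the nodename file the CNI plugin stats before ADD/DEL calls."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if NODENAME.is_file():
                return NODENAME.read_text().strip()
            time.sleep(interval)
        raise TimeoutError(f"{NODENAME} still missing; is calico/node running?")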
Sep 12 17:34:23.724197 containerd[1494]: time="2025-09-12T17:34:23.724114750Z" level=info msg="CreateContainer within sandbox \"a423a2ba9360f6204732f6b86743b19886203cb5214fafcd12ae144f45465ec1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4bb0a0788a573f2956847227c155e3cdb906b95c92af5d2f55e311d8eb00b604\"" Sep 12 17:34:23.735947 containerd[1494]: time="2025-09-12T17:34:23.734700980Z" level=info msg="StartContainer for \"4bb0a0788a573f2956847227c155e3cdb906b95c92af5d2f55e311d8eb00b604\"" Sep 12 17:34:23.950913 systemd[1]: Started cri-containerd-4bb0a0788a573f2956847227c155e3cdb906b95c92af5d2f55e311d8eb00b604.scope - libcontainer container 4bb0a0788a573f2956847227c155e3cdb906b95c92af5d2f55e311d8eb00b604. Sep 12 17:34:23.984618 containerd[1494]: time="2025-09-12T17:34:23.984505602Z" level=info msg="StartContainer for \"4bb0a0788a573f2956847227c155e3cdb906b95c92af5d2f55e311d8eb00b604\" returns successfully" Sep 12 17:34:24.076554 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:34:24.078024 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:34:24.190438 kubelet[2565]: I0912 17:34:24.178168 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gvbw8" podStartSLOduration=1.684046806 podStartE2EDuration="15.157904567s" podCreationTimestamp="2025-09-12 17:34:09 +0000 UTC" firstStartedPulling="2025-09-12 17:34:10.127317517 +0000 UTC m=+20.378836342" lastFinishedPulling="2025-09-12 17:34:23.601175278 +0000 UTC m=+33.852694103" observedRunningTime="2025-09-12 17:34:24.14637785 +0000 UTC m=+34.397896675" watchObservedRunningTime="2025-09-12 17:34:24.157904567 +0000 UTC m=+34.409423392" Sep 12 17:34:24.373596 containerd[1494]: time="2025-09-12T17:34:24.373518697Z" level=info msg="StopPodSandbox for \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\"" Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.467 [INFO][3781] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.468 [INFO][3781] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" iface="eth0" netns="/var/run/netns/cni-6d49e8f6-c6fc-9b21-ebbd-9707acdfc47d" Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.469 [INFO][3781] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" iface="eth0" netns="/var/run/netns/cni-6d49e8f6-c6fc-9b21-ebbd-9707acdfc47d" Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.469 [INFO][3781] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" iface="eth0" netns="/var/run/netns/cni-6d49e8f6-c6fc-9b21-ebbd-9707acdfc47d" Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.469 [INFO][3781] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.469 [INFO][3781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.586 [INFO][3788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" HandleID="k8s-pod-network.dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.588 [INFO][3788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.588 [INFO][3788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.597 [WARNING][3788] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" HandleID="k8s-pod-network.dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.597 [INFO][3788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" HandleID="k8s-pod-network.dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.599 [INFO][3788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:24.603770 containerd[1494]: 2025-09-12 17:34:24.601 [INFO][3781] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:24.603770 containerd[1494]: time="2025-09-12T17:34:24.603747784Z" level=info msg="TearDown network for sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\" successfully" Sep 12 17:34:24.604439 containerd[1494]: time="2025-09-12T17:34:24.603786365Z" level=info msg="StopPodSandbox for \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\" returns successfully" Sep 12 17:34:24.612057 systemd[1]: run-netns-cni\x2d6d49e8f6\x2dc6fc\x2d9b21\x2debbd\x2d9707acdfc47d.mount: Deactivated successfully. 
Sep 12 17:34:24.741311 kubelet[2565]: I0912 17:34:24.740780 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a684160d-e91c-401a-8d80-a0113d1dea0d-whisker-backend-key-pair\") pod \"a684160d-e91c-401a-8d80-a0113d1dea0d\" (UID: \"a684160d-e91c-401a-8d80-a0113d1dea0d\") " Sep 12 17:34:24.741311 kubelet[2565]: I0912 17:34:24.740905 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwt7c\" (UniqueName: \"kubernetes.io/projected/a684160d-e91c-401a-8d80-a0113d1dea0d-kube-api-access-kwt7c\") pod \"a684160d-e91c-401a-8d80-a0113d1dea0d\" (UID: \"a684160d-e91c-401a-8d80-a0113d1dea0d\") " Sep 12 17:34:24.741311 kubelet[2565]: I0912 17:34:24.740967 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a684160d-e91c-401a-8d80-a0113d1dea0d-whisker-ca-bundle\") pod \"a684160d-e91c-401a-8d80-a0113d1dea0d\" (UID: \"a684160d-e91c-401a-8d80-a0113d1dea0d\") " Sep 12 17:34:24.764074 kubelet[2565]: I0912 17:34:24.762755 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a684160d-e91c-401a-8d80-a0113d1dea0d-kube-api-access-kwt7c" (OuterVolumeSpecName: "kube-api-access-kwt7c") pod "a684160d-e91c-401a-8d80-a0113d1dea0d" (UID: "a684160d-e91c-401a-8d80-a0113d1dea0d"). InnerVolumeSpecName "kube-api-access-kwt7c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:34:24.764613 systemd[1]: var-lib-kubelet-pods-a684160d\x2de91c\x2d401a\x2d8d80\x2da0113d1dea0d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkwt7c.mount: Deactivated successfully. Sep 12 17:34:24.766442 kubelet[2565]: I0912 17:34:24.761811 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a684160d-e91c-401a-8d80-a0113d1dea0d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a684160d-e91c-401a-8d80-a0113d1dea0d" (UID: "a684160d-e91c-401a-8d80-a0113d1dea0d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:34:24.769867 systemd[1]: var-lib-kubelet-pods-a684160d\x2de91c\x2d401a\x2d8d80\x2da0113d1dea0d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:34:24.770712 kubelet[2565]: I0912 17:34:24.770578 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a684160d-e91c-401a-8d80-a0113d1dea0d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a684160d-e91c-401a-8d80-a0113d1dea0d" (UID: "a684160d-e91c-401a-8d80-a0113d1dea0d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:34:24.842079 kubelet[2565]: I0912 17:34:24.842023 2565 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a684160d-e91c-401a-8d80-a0113d1dea0d-whisker-backend-key-pair\") on node \"ci-4081-3-6-2-340685d2b8\" DevicePath \"\"" Sep 12 17:34:24.842079 kubelet[2565]: I0912 17:34:24.842059 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kwt7c\" (UniqueName: \"kubernetes.io/projected/a684160d-e91c-401a-8d80-a0113d1dea0d-kube-api-access-kwt7c\") on node \"ci-4081-3-6-2-340685d2b8\" DevicePath \"\"" Sep 12 17:34:24.842079 kubelet[2565]: I0912 17:34:24.842069 2565 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a684160d-e91c-401a-8d80-a0113d1dea0d-whisker-ca-bundle\") on node \"ci-4081-3-6-2-340685d2b8\" DevicePath \"\"" Sep 12 17:34:25.127748 kubelet[2565]: I0912 17:34:25.127638 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:34:25.132371 systemd[1]: Removed slice kubepods-besteffort-poda684160d_e91c_401a_8d80_a0113d1dea0d.slice - libcontainer container kubepods-besteffort-poda684160d_e91c_401a_8d80_a0113d1dea0d.slice. Sep 12 17:34:25.267588 systemd[1]: Created slice kubepods-besteffort-poda2824dba_f4c1_4162_879f_912bb79c57ea.slice - libcontainer container kubepods-besteffort-poda2824dba_f4c1_4162_879f_912bb79c57ea.slice. Sep 12 17:34:25.269926 kubelet[2565]: I0912 17:34:25.269851 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6svvh\" (UniqueName: \"kubernetes.io/projected/a2824dba-f4c1-4162-879f-912bb79c57ea-kube-api-access-6svvh\") pod \"whisker-5cd4fff57f-rcs8x\" (UID: \"a2824dba-f4c1-4162-879f-912bb79c57ea\") " pod="calico-system/whisker-5cd4fff57f-rcs8x" Sep 12 17:34:25.270229 kubelet[2565]: I0912 17:34:25.269940 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2824dba-f4c1-4162-879f-912bb79c57ea-whisker-backend-key-pair\") pod \"whisker-5cd4fff57f-rcs8x\" (UID: \"a2824dba-f4c1-4162-879f-912bb79c57ea\") " pod="calico-system/whisker-5cd4fff57f-rcs8x" Sep 12 17:34:25.270229 kubelet[2565]: I0912 17:34:25.270003 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2824dba-f4c1-4162-879f-912bb79c57ea-whisker-ca-bundle\") pod \"whisker-5cd4fff57f-rcs8x\" (UID: \"a2824dba-f4c1-4162-879f-912bb79c57ea\") " pod="calico-system/whisker-5cd4fff57f-rcs8x" Sep 12 17:34:25.575821 containerd[1494]: time="2025-09-12T17:34:25.575772189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd4fff57f-rcs8x,Uid:a2824dba-f4c1-4162-879f-912bb79c57ea,Namespace:calico-system,Attempt:0,}" Sep 12 17:34:25.789604 systemd-networkd[1391]: cali446cd2dc09f: Link UP Sep 12 17:34:25.792811 systemd-networkd[1391]: cali446cd2dc09f: Gained carrier Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.652 [INFO][3814] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.668 [INFO][3814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0 whisker-5cd4fff57f- 
calico-system a2824dba-f4c1-4162-879f-912bb79c57ea 920 0 2025-09-12 17:34:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5cd4fff57f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-2-340685d2b8 whisker-5cd4fff57f-rcs8x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali446cd2dc09f [] [] }} ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" Namespace="calico-system" Pod="whisker-5cd4fff57f-rcs8x" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.668 [INFO][3814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" Namespace="calico-system" Pod="whisker-5cd4fff57f-rcs8x" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.716 [INFO][3840] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" HandleID="k8s-pod-network.a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.717 [INFO][3840] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" HandleID="k8s-pod-network.a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-2-340685d2b8", "pod":"whisker-5cd4fff57f-rcs8x", "timestamp":"2025-09-12 17:34:25.716658464 +0000 UTC"}, Hostname:"ci-4081-3-6-2-340685d2b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.717 [INFO][3840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.717 [INFO][3840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.717 [INFO][3840] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-2-340685d2b8' Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.729 [INFO][3840] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.741 [INFO][3840] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.746 [INFO][3840] ipam/ipam.go 511: Trying affinity for 192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.748 [INFO][3840] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.750 [INFO][3840] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.750 [INFO][3840] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.751 [INFO][3840] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481 Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.754 [INFO][3840] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.762 [INFO][3840] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.65/26] block=192.168.52.64/26 handle="k8s-pod-network.a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.762 [INFO][3840] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.65/26] handle="k8s-pod-network.a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.762 [INFO][3840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:34:25.827295 containerd[1494]: 2025-09-12 17:34:25.762 [INFO][3840] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.65/26] IPv6=[] ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" HandleID="k8s-pod-network.a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0" Sep 12 17:34:25.830855 containerd[1494]: 2025-09-12 17:34:25.765 [INFO][3814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" Namespace="calico-system" Pod="whisker-5cd4fff57f-rcs8x" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0", GenerateName:"whisker-5cd4fff57f-", Namespace:"calico-system", SelfLink:"", UID:"a2824dba-f4c1-4162-879f-912bb79c57ea", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cd4fff57f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"", Pod:"whisker-5cd4fff57f-rcs8x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali446cd2dc09f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:25.830855 containerd[1494]: 2025-09-12 17:34:25.765 [INFO][3814] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.65/32] ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" Namespace="calico-system" Pod="whisker-5cd4fff57f-rcs8x" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0" Sep 12 17:34:25.830855 containerd[1494]: 2025-09-12 17:34:25.765 [INFO][3814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali446cd2dc09f ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" Namespace="calico-system" Pod="whisker-5cd4fff57f-rcs8x" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0" Sep 12 17:34:25.830855 containerd[1494]: 2025-09-12 17:34:25.785 [INFO][3814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" Namespace="calico-system" Pod="whisker-5cd4fff57f-rcs8x" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0" Sep 12 17:34:25.830855 containerd[1494]: 2025-09-12 17:34:25.793 [INFO][3814] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" 
Namespace="calico-system" Pod="whisker-5cd4fff57f-rcs8x" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0", GenerateName:"whisker-5cd4fff57f-", Namespace:"calico-system", SelfLink:"", UID:"a2824dba-f4c1-4162-879f-912bb79c57ea", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cd4fff57f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481", Pod:"whisker-5cd4fff57f-rcs8x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali446cd2dc09f", MAC:"e6:9d:d3:06:18:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:25.830855 containerd[1494]: 2025-09-12 17:34:25.820 [INFO][3814] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481" Namespace="calico-system" Pod="whisker-5cd4fff57f-rcs8x" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-whisker--5cd4fff57f--rcs8x-eth0" Sep 12 17:34:25.862435 containerd[1494]: time="2025-09-12T17:34:25.861908835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:25.862729 containerd[1494]: time="2025-09-12T17:34:25.862708090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:25.862869 containerd[1494]: time="2025-09-12T17:34:25.862807177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:25.863333 containerd[1494]: time="2025-09-12T17:34:25.863083957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:25.876765 kubelet[2565]: I0912 17:34:25.876631 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a684160d-e91c-401a-8d80-a0113d1dea0d" path="/var/lib/kubelet/pods/a684160d-e91c-401a-8d80-a0113d1dea0d/volumes" Sep 12 17:34:25.894805 systemd[1]: Started cri-containerd-a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481.scope - libcontainer container a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481. 
Sep 12 17:34:25.979793 containerd[1494]: time="2025-09-12T17:34:25.979752008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd4fff57f-rcs8x,Uid:a2824dba-f4c1-4162-879f-912bb79c57ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481\"" Sep 12 17:34:25.982806 containerd[1494]: time="2025-09-12T17:34:25.982739898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:34:26.185206 kernel: bpftool[3997]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:34:26.424632 systemd-networkd[1391]: vxlan.calico: Link UP Sep 12 17:34:26.424642 systemd-networkd[1391]: vxlan.calico: Gained carrier Sep 12 17:34:27.377556 systemd-networkd[1391]: cali446cd2dc09f: Gained IPv6LL Sep 12 17:34:27.561874 containerd[1494]: time="2025-09-12T17:34:27.561783907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:27.573437 containerd[1494]: time="2025-09-12T17:34:27.573349452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:34:27.574505 containerd[1494]: time="2025-09-12T17:34:27.574445681Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:27.576522 containerd[1494]: time="2025-09-12T17:34:27.576501536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:27.577340 containerd[1494]: time="2025-09-12T17:34:27.577251354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.594291786s" Sep 12 17:34:27.577410 containerd[1494]: time="2025-09-12T17:34:27.577342352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:34:27.587237 containerd[1494]: time="2025-09-12T17:34:27.587095078Z" level=info msg="CreateContainer within sandbox \"a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:34:27.600741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3137726310.mount: Deactivated successfully. 
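The pull records carry enough detail to estimate registry throughput: containerd logs both a bytes-read counter and the elapsed time, here 4661291 bytes of whisker content in about 1.59 s, and earlier 157078339 bytes of calico/node in about 4.54 s. A quick computation over those figures, assuming the bytes-read counter approximates what was actually transferred:

    pulls = {
        "ghcr.io/flatcar/calico/node:v3.30.3": (157078339, 4.544432568),
        "ghcr.io/flatcar/calico/whisker:v3.30.3": (4661291, 1.594291786),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")  # ~34.6 and ~2.9 MB/s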
Sep 12 17:34:27.602332 containerd[1494]: time="2025-09-12T17:34:27.601511901Z" level=info msg="CreateContainer within sandbox \"a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9267d4bbe23d7c9ef541ddc36cdb94d8c4f14307bf1be81c634feacea6b37f50\"" Sep 12 17:34:27.603191 containerd[1494]: time="2025-09-12T17:34:27.602541171Z" level=info msg="StartContainer for \"9267d4bbe23d7c9ef541ddc36cdb94d8c4f14307bf1be81c634feacea6b37f50\"" Sep 12 17:34:27.643368 systemd[1]: Started cri-containerd-9267d4bbe23d7c9ef541ddc36cdb94d8c4f14307bf1be81c634feacea6b37f50.scope - libcontainer container 9267d4bbe23d7c9ef541ddc36cdb94d8c4f14307bf1be81c634feacea6b37f50. Sep 12 17:34:27.678063 containerd[1494]: time="2025-09-12T17:34:27.677771433Z" level=info msg="StartContainer for \"9267d4bbe23d7c9ef541ddc36cdb94d8c4f14307bf1be81c634feacea6b37f50\" returns successfully" Sep 12 17:34:27.680820 containerd[1494]: time="2025-09-12T17:34:27.680331725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:34:28.465303 systemd-networkd[1391]: vxlan.calico: Gained IPv6LL Sep 12 17:34:29.951638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4118494359.mount: Deactivated successfully. Sep 12 17:34:29.964616 containerd[1494]: time="2025-09-12T17:34:29.964567777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:29.965759 containerd[1494]: time="2025-09-12T17:34:29.965599458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:34:29.967792 containerd[1494]: time="2025-09-12T17:34:29.966557085Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:29.968997 containerd[1494]: time="2025-09-12T17:34:29.968969595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:29.969604 containerd[1494]: time="2025-09-12T17:34:29.969582682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.289212486s" Sep 12 17:34:29.969677 containerd[1494]: time="2025-09-12T17:34:29.969665151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:34:29.974014 containerd[1494]: time="2025-09-12T17:34:29.973885895Z" level=info msg="CreateContainer within sandbox \"a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:34:30.009522 containerd[1494]: time="2025-09-12T17:34:30.009473204Z" level=info msg="CreateContainer within sandbox \"a3637b54923b3569fd40d9872253b7b1bb3e961cff8217d259cee71e2df92481\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id 
\"cdbe872e8876cad9596ada20f4b162581a4d6ae9b8a860756c489882581c0863\"" Sep 12 17:34:30.010481 containerd[1494]: time="2025-09-12T17:34:30.010450235Z" level=info msg="StartContainer for \"cdbe872e8876cad9596ada20f4b162581a4d6ae9b8a860756c489882581c0863\"" Sep 12 17:34:30.052453 systemd[1]: Started cri-containerd-cdbe872e8876cad9596ada20f4b162581a4d6ae9b8a860756c489882581c0863.scope - libcontainer container cdbe872e8876cad9596ada20f4b162581a4d6ae9b8a860756c489882581c0863. Sep 12 17:34:30.097507 containerd[1494]: time="2025-09-12T17:34:30.097470747Z" level=info msg="StartContainer for \"cdbe872e8876cad9596ada20f4b162581a4d6ae9b8a860756c489882581c0863\" returns successfully" Sep 12 17:34:30.184353 kubelet[2565]: I0912 17:34:30.184265 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5cd4fff57f-rcs8x" podStartSLOduration=1.195671568 podStartE2EDuration="5.184242177s" podCreationTimestamp="2025-09-12 17:34:25 +0000 UTC" firstStartedPulling="2025-09-12 17:34:25.98209627 +0000 UTC m=+36.233615095" lastFinishedPulling="2025-09-12 17:34:29.970666879 +0000 UTC m=+40.222185704" observedRunningTime="2025-09-12 17:34:30.183067199 +0000 UTC m=+40.434586024" watchObservedRunningTime="2025-09-12 17:34:30.184242177 +0000 UTC m=+40.435761002" Sep 12 17:34:30.865325 containerd[1494]: time="2025-09-12T17:34:30.865263132Z" level=info msg="StopPodSandbox for \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\"" Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.909 [INFO][4167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.909 [INFO][4167] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" iface="eth0" netns="/var/run/netns/cni-69c25ec0-74df-9efb-aec2-1fd0657a448d" Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.910 [INFO][4167] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" iface="eth0" netns="/var/run/netns/cni-69c25ec0-74df-9efb-aec2-1fd0657a448d" Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.910 [INFO][4167] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" iface="eth0" netns="/var/run/netns/cni-69c25ec0-74df-9efb-aec2-1fd0657a448d" Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.910 [INFO][4167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.910 [INFO][4167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.938 [INFO][4174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" HandleID="k8s-pod-network.45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.938 [INFO][4174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.939 [INFO][4174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.944 [WARNING][4174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" HandleID="k8s-pod-network.45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.944 [INFO][4174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" HandleID="k8s-pod-network.45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.946 [INFO][4174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:30.950623 containerd[1494]: 2025-09-12 17:34:30.948 [INFO][4167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:30.954781 containerd[1494]: time="2025-09-12T17:34:30.950777215Z" level=info msg="TearDown network for sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\" successfully" Sep 12 17:34:30.954781 containerd[1494]: time="2025-09-12T17:34:30.950807517Z" level=info msg="StopPodSandbox for \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\" returns successfully" Sep 12 17:34:30.954781 containerd[1494]: time="2025-09-12T17:34:30.952470119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5598f4bff7-rqkr4,Uid:740d955b-2103-436a-891e-3472e5de0fd4,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:34:30.954170 systemd[1]: run-netns-cni\x2d69c25ec0\x2d74df\x2d9efb\x2daec2\x2d1fd0657a448d.mount: Deactivated successfully. 
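Note the Attempt:1 in the RunPodSandbox metadata: this is the same calico-apiserver pod whose sandbox could not be torn down at 17:34:20. With /var/lib/calico/nodename now present, the retried StopPodSandbox succeeds and the kubelet immediately re-creates the sandbox. Following a single sandbox ID through a journal dump makes that lifecycle easy to read; a small Python filter sketch (journal.txt is a hypothetical dump of this log):

    import sys

    SANDBOX = "45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5"

    # Usage: python trace_sandbox.py < journal.txt
    for line in sys.stdin:
        if SANDBOX in line:
            print(line.rstrip()[:120])  # timestamp plus enough text to see the verb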
Sep 12 17:34:31.078195 systemd-networkd[1391]: cali13d2313f9d0: Link UP Sep 12 17:34:31.078432 systemd-networkd[1391]: cali13d2313f9d0: Gained carrier Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.000 [INFO][4181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0 calico-apiserver-5598f4bff7- calico-apiserver 740d955b-2103-436a-891e-3472e5de0fd4 950 0 2025-09-12 17:34:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5598f4bff7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-2-340685d2b8 calico-apiserver-5598f4bff7-rqkr4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali13d2313f9d0 [] [] }} ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-rqkr4" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.002 [INFO][4181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-rqkr4" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.034 [INFO][4192] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" HandleID="k8s-pod-network.9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.034 [INFO][4192] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" HandleID="k8s-pod-network.9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-2-340685d2b8", "pod":"calico-apiserver-5598f4bff7-rqkr4", "timestamp":"2025-09-12 17:34:31.034114434 +0000 UTC"}, Hostname:"ci-4081-3-6-2-340685d2b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.034 [INFO][4192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.034 [INFO][4192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.034 [INFO][4192] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-2-340685d2b8' Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.041 [INFO][4192] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.048 [INFO][4192] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.052 [INFO][4192] ipam/ipam.go 511: Trying affinity for 192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.054 [INFO][4192] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.056 [INFO][4192] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.056 [INFO][4192] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.057 [INFO][4192] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087 Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.063 [INFO][4192] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.071 [INFO][4192] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.66/26] block=192.168.52.64/26 handle="k8s-pod-network.9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.071 [INFO][4192] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.66/26] handle="k8s-pod-network.9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.071 [INFO][4192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:34:31.103910 containerd[1494]: 2025-09-12 17:34:31.071 [INFO][4192] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.66/26] IPv6=[] ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" HandleID="k8s-pod-network.9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:31.107222 containerd[1494]: 2025-09-12 17:34:31.074 [INFO][4181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-rqkr4" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0", GenerateName:"calico-apiserver-5598f4bff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"740d955b-2103-436a-891e-3472e5de0fd4", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5598f4bff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"", Pod:"calico-apiserver-5598f4bff7-rqkr4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13d2313f9d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:31.107222 containerd[1494]: 2025-09-12 17:34:31.074 [INFO][4181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.66/32] ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-rqkr4" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:31.107222 containerd[1494]: 2025-09-12 17:34:31.074 [INFO][4181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13d2313f9d0 ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-rqkr4" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:31.107222 containerd[1494]: 2025-09-12 17:34:31.077 [INFO][4181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-rqkr4" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:31.107222 containerd[1494]: 2025-09-12 
17:34:31.079 [INFO][4181] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-rqkr4" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0", GenerateName:"calico-apiserver-5598f4bff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"740d955b-2103-436a-891e-3472e5de0fd4", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5598f4bff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087", Pod:"calico-apiserver-5598f4bff7-rqkr4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13d2313f9d0", MAC:"0e:59:0a:ce:bd:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:31.107222 containerd[1494]: 2025-09-12 17:34:31.095 [INFO][4181] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-rqkr4" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:31.131656 containerd[1494]: time="2025-09-12T17:34:31.131085111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:31.131656 containerd[1494]: time="2025-09-12T17:34:31.131197763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:31.131656 containerd[1494]: time="2025-09-12T17:34:31.131217923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:31.131656 containerd[1494]: time="2025-09-12T17:34:31.131414097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:31.162293 systemd[1]: Started cri-containerd-9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087.scope - libcontainer container 9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087. 
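The endpoint write completes with a generated MAC ("0e:59:0a:ce:bd:22") for the container side of cali13d2313f9d0; like the whisker endpoint's "e6:9d:d3:06:18:df" earlier, the first octet has the locally-administered bit set and the multicast bit clear. A sketch of generating an address with those properties, an assumption about the general scheme rather than Calico's exact generator:

    import random

    def random_laa_mac() -> str:
        """Random unicast MAC with the locally-administered bit set."""
        octets = [random.randint(0, 255) for _ in range(6)]
        octets[0] = (octets[0] & 0b11111100) | 0b00000010  # clear multicast, set local
        return ":".join(f"{o:02x}" for o in octets)

    print(random_laa_mac())  # first octet always ends in 2, 6, a, or e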
Sep 12 17:34:31.203035 containerd[1494]: time="2025-09-12T17:34:31.202717206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5598f4bff7-rqkr4,Uid:740d955b-2103-436a-891e-3472e5de0fd4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087\"" Sep 12 17:34:31.206270 containerd[1494]: time="2025-09-12T17:34:31.206179430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:34:31.866399 containerd[1494]: time="2025-09-12T17:34:31.866344291Z" level=info msg="StopPodSandbox for \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\"" Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.910 [INFO][4265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.910 [INFO][4265] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" iface="eth0" netns="/var/run/netns/cni-fcaf4fd9-e2c0-c3ce-ec31-3a5cbae344e2" Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.910 [INFO][4265] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" iface="eth0" netns="/var/run/netns/cni-fcaf4fd9-e2c0-c3ce-ec31-3a5cbae344e2" Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.910 [INFO][4265] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" iface="eth0" netns="/var/run/netns/cni-fcaf4fd9-e2c0-c3ce-ec31-3a5cbae344e2" Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.910 [INFO][4265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.910 [INFO][4265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.934 [INFO][4272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" HandleID="k8s-pod-network.25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.934 [INFO][4272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.934 [INFO][4272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.940 [WARNING][4272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" HandleID="k8s-pod-network.25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.940 [INFO][4272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" HandleID="k8s-pod-network.25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.942 [INFO][4272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:31.946073 containerd[1494]: 2025-09-12 17:34:31.944 [INFO][4265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Sep 12 17:34:31.947542 containerd[1494]: time="2025-09-12T17:34:31.946228538Z" level=info msg="TearDown network for sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\" successfully" Sep 12 17:34:31.947542 containerd[1494]: time="2025-09-12T17:34:31.946258049Z" level=info msg="StopPodSandbox for \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\" returns successfully" Sep 12 17:34:31.947542 containerd[1494]: time="2025-09-12T17:34:31.947040554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5bksp,Uid:3081acf9-a29d-4b4d-97c7-a33056bd9370,Namespace:calico-system,Attempt:1,}" Sep 12 17:34:31.953896 systemd[1]: run-netns-cni\x2dfcaf4fd9\x2de2c0\x2dc3ce\x2dec31\x2d3a5cbae344e2.mount: Deactivated successfully. Sep 12 17:34:32.065221 systemd-networkd[1391]: cali8d349f8a57c: Link UP Sep 12 17:34:32.068263 systemd-networkd[1391]: cali8d349f8a57c: Gained carrier Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:31.995 [INFO][4278] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0 goldmane-54d579b49d- calico-system 3081acf9-a29d-4b4d-97c7-a33056bd9370 959 0 2025-09-12 17:34:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-2-340685d2b8 goldmane-54d579b49d-5bksp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8d349f8a57c [] [] }} ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" Namespace="calico-system" Pod="goldmane-54d579b49d-5bksp" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:31.995 [INFO][4278] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" Namespace="calico-system" Pod="goldmane-54d579b49d-5bksp" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.023 [INFO][4291] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" HandleID="k8s-pod-network.7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" 
Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.024 [INFO][4291] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" HandleID="k8s-pod-network.7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-2-340685d2b8", "pod":"goldmane-54d579b49d-5bksp", "timestamp":"2025-09-12 17:34:32.023914463 +0000 UTC"}, Hostname:"ci-4081-3-6-2-340685d2b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.024 [INFO][4291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.024 [INFO][4291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.024 [INFO][4291] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-2-340685d2b8' Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.032 [INFO][4291] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.037 [INFO][4291] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.042 [INFO][4291] ipam/ipam.go 511: Trying affinity for 192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.044 [INFO][4291] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.047 [INFO][4291] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.047 [INFO][4291] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.048 [INFO][4291] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.053 [INFO][4291] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.059 [INFO][4291] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.67/26] block=192.168.52.64/26 handle="k8s-pod-network.7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.059 [INFO][4291] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.67/26] 
handle="k8s-pod-network.7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.059 [INFO][4291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:32.089663 containerd[1494]: 2025-09-12 17:34:32.059 [INFO][4291] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.67/26] IPv6=[] ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" HandleID="k8s-pod-network.7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:32.091834 containerd[1494]: 2025-09-12 17:34:32.062 [INFO][4278] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" Namespace="calico-system" Pod="goldmane-54d579b49d-5bksp" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"3081acf9-a29d-4b4d-97c7-a33056bd9370", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"", Pod:"goldmane-54d579b49d-5bksp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8d349f8a57c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:32.091834 containerd[1494]: 2025-09-12 17:34:32.062 [INFO][4278] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.67/32] ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" Namespace="calico-system" Pod="goldmane-54d579b49d-5bksp" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:32.091834 containerd[1494]: 2025-09-12 17:34:32.062 [INFO][4278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d349f8a57c ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" Namespace="calico-system" Pod="goldmane-54d579b49d-5bksp" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:32.091834 containerd[1494]: 2025-09-12 17:34:32.069 [INFO][4278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" Namespace="calico-system" Pod="goldmane-54d579b49d-5bksp" 
WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:32.091834 containerd[1494]: 2025-09-12 17:34:32.069 [INFO][4278] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" Namespace="calico-system" Pod="goldmane-54d579b49d-5bksp" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"3081acf9-a29d-4b4d-97c7-a33056bd9370", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f", Pod:"goldmane-54d579b49d-5bksp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8d349f8a57c", MAC:"ca:dc:3d:80:26:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:32.091834 containerd[1494]: 2025-09-12 17:34:32.086 [INFO][4278] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f" Namespace="calico-system" Pod="goldmane-54d579b49d-5bksp" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:32.111518 containerd[1494]: time="2025-09-12T17:34:32.111078718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:32.111981 containerd[1494]: time="2025-09-12T17:34:32.111953046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:32.112147 containerd[1494]: time="2025-09-12T17:34:32.112066960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:32.112299 containerd[1494]: time="2025-09-12T17:34:32.112254924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:32.140332 systemd[1]: Started cri-containerd-7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f.scope - libcontainer container 7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f. 
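The ipam.go sequence above (lines 691 through 1256 in the plugin source) shows Calico's block-affinity allocation: the host's affinity to 192.168.52.64/26 is confirmed, the block is loaded, the next free address (192.168.52.67) is selected, and the block is written back to claim it. A minimal sketch of the selection step only, using an in-memory map instead of Calico's datastore-backed blocks; nextFree and the allocated map are stand-ins, and the compare-and-swap write ("Writing block in order to claim IPs") is not modeled:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // nextFree picks the lowest unallocated address in the block,
    // mimicking the "attempt to assign 1 address from block" step.
    func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
    	for a := block.Addr(); block.Contains(a); a = a.Next() {
    		if !allocated[a] {
    			allocated[a] = true
    			return a, true
    		}
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	block := netip.MustParsePrefix("192.168.52.64/26")
    	allocated := map[netip.Addr]bool{
    		netip.MustParseAddr("192.168.52.64"): true, // pretend these are taken
    		netip.MustParseAddr("192.168.52.65"): true, // (e.g. by earlier workloads)
    		netip.MustParseAddr("192.168.52.66"): true, // apiserver-rqkr4, per the log
    	}
    	if ip, ok := nextFree(block, allocated); ok {
    		fmt.Println("claimed", ip) // claimed 192.168.52.67
    	}
    }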
Sep 12 17:34:32.184492 containerd[1494]: time="2025-09-12T17:34:32.184447423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5bksp,Uid:3081acf9-a29d-4b4d-97c7-a33056bd9370,Namespace:calico-system,Attempt:1,} returns sandbox id \"7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f\"" Sep 12 17:34:32.561341 systemd-networkd[1391]: cali13d2313f9d0: Gained IPv6LL Sep 12 17:34:32.866420 containerd[1494]: time="2025-09-12T17:34:32.865840894Z" level=info msg="StopPodSandbox for \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\"" Sep 12 17:34:32.866420 containerd[1494]: time="2025-09-12T17:34:32.866330215Z" level=info msg="StopPodSandbox for \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\"" Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.947 [INFO][4369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.947 [INFO][4369] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" iface="eth0" netns="/var/run/netns/cni-a0980ed9-a873-9138-8d76-e7c7db4271ae" Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.947 [INFO][4369] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" iface="eth0" netns="/var/run/netns/cni-a0980ed9-a873-9138-8d76-e7c7db4271ae" Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.949 [INFO][4369] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" iface="eth0" netns="/var/run/netns/cni-a0980ed9-a873-9138-8d76-e7c7db4271ae" Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.952 [INFO][4369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.952 [INFO][4369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.987 [INFO][4387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" HandleID="k8s-pod-network.a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.987 [INFO][4387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.987 [INFO][4387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.994 [WARNING][4387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" HandleID="k8s-pod-network.a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.994 [INFO][4387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" HandleID="k8s-pod-network.a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:32.997 [INFO][4387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:33.007543 containerd[1494]: 2025-09-12 17:34:33.003 [INFO][4369] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Sep 12 17:34:33.009678 containerd[1494]: time="2025-09-12T17:34:33.009213768Z" level=info msg="TearDown network for sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\" successfully" Sep 12 17:34:33.009678 containerd[1494]: time="2025-09-12T17:34:33.009278531Z" level=info msg="StopPodSandbox for \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\" returns successfully" Sep 12 17:34:33.014159 systemd[1]: run-netns-cni\x2da0980ed9\x2da873\x2d9138\x2d8d76\x2de7c7db4271ae.mount: Deactivated successfully. Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:32.944 [INFO][4364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:32.944 [INFO][4364] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" iface="eth0" netns="/var/run/netns/cni-5c289102-d8a2-71b3-92b6-964a80b79df0" Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:32.945 [INFO][4364] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" iface="eth0" netns="/var/run/netns/cni-5c289102-d8a2-71b3-92b6-964a80b79df0" Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:32.945 [INFO][4364] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" iface="eth0" netns="/var/run/netns/cni-5c289102-d8a2-71b3-92b6-964a80b79df0" Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:32.945 [INFO][4364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:32.945 [INFO][4364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:32.996 [INFO][4382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" HandleID="k8s-pod-network.5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:32.997 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:32.997 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:33.005 [WARNING][4382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" HandleID="k8s-pod-network.5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:33.005 [INFO][4382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" HandleID="k8s-pod-network.5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:33.008 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:33.014520 containerd[1494]: 2025-09-12 17:34:33.010 [INFO][4364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Sep 12 17:34:33.017638 containerd[1494]: time="2025-09-12T17:34:33.014622236Z" level=info msg="TearDown network for sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\" successfully" Sep 12 17:34:33.017638 containerd[1494]: time="2025-09-12T17:34:33.014672720Z" level=info msg="StopPodSandbox for \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\" returns successfully" Sep 12 17:34:33.017638 containerd[1494]: time="2025-09-12T17:34:33.015624130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2drmz,Uid:ddfd8960-9299-46a3-9189-694d4fce081a,Namespace:kube-system,Attempt:1,}" Sep 12 17:34:33.018480 containerd[1494]: time="2025-09-12T17:34:33.017989738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5598f4bff7-6jjd6,Uid:d9173bd3-d47c-4dd9-b166-5150b07180f7,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:34:33.019870 systemd[1]: run-netns-cni\x2d5c289102\x2dd8a2\x2d71b3\x2d92b6\x2d964a80b79df0.mount: Deactivated successfully. 
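Both teardowns above hit the same WARNING, "Asked to release address but it doesn't exist. Ignoring", and still finish with "returns successfully". That is the CNI DEL contract in action: deletes must be idempotent, so a missing allocation is treated as already released rather than as an error. A sketch of the pattern; the error value, store type, and handle ID are stand-ins, not Calico's actual API:

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("allocation not found")

    // store is a stand-in for the IPAM datastore: handleID -> IP.
    type store map[string]string

    func (s store) release(handleID string) error {
    	if _, ok := s[handleID]; !ok {
    		return errNotFound
    	}
    	delete(s, handleID)
    	return nil
    }

    // releaseIdempotent swallows not-found, so a repeated CNI DEL
    // still "returns successfully", as in the entries above.
    func releaseIdempotent(s store, handleID string) error {
    	if err := s.release(handleID); errors.Is(err, errNotFound) {
    		fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring")
    		return nil
    	} else if err != nil {
    		return err
    	}
    	return nil
    }

    func main() {
    	s := store{}
    	_ = releaseIdempotent(s, "k8s-pod-network.deadbeef") // repeat DEL: clean no-op
    }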
Sep 12 17:34:33.181269 systemd-networkd[1391]: cali9ddc65ee462: Link UP Sep 12 17:34:33.183707 systemd-networkd[1391]: cali9ddc65ee462: Gained carrier Sep 12 17:34:33.201715 systemd-networkd[1391]: cali8d349f8a57c: Gained IPv6LL Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.097 [INFO][4396] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0 calico-apiserver-5598f4bff7- calico-apiserver d9173bd3-d47c-4dd9-b166-5150b07180f7 966 0 2025-09-12 17:34:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5598f4bff7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-2-340685d2b8 calico-apiserver-5598f4bff7-6jjd6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ddc65ee462 [] [] }} ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-6jjd6" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.097 [INFO][4396] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-6jjd6" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.136 [INFO][4421] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" HandleID="k8s-pod-network.1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.137 [INFO][4421] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" HandleID="k8s-pod-network.1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-2-340685d2b8", "pod":"calico-apiserver-5598f4bff7-6jjd6", "timestamp":"2025-09-12 17:34:33.136934361 +0000 UTC"}, Hostname:"ci-4081-3-6-2-340685d2b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.137 [INFO][4421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.137 [INFO][4421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.137 [INFO][4421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-2-340685d2b8' Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.144 [INFO][4421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.148 [INFO][4421] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.155 [INFO][4421] ipam/ipam.go 511: Trying affinity for 192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.157 [INFO][4421] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.160 [INFO][4421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.160 [INFO][4421] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.161 [INFO][4421] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5 Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.166 [INFO][4421] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.173 [INFO][4421] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.68/26] block=192.168.52.64/26 handle="k8s-pod-network.1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.173 [INFO][4421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.68/26] handle="k8s-pod-network.1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.174 [INFO][4421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:34:33.221570 containerd[1494]: 2025-09-12 17:34:33.174 [INFO][4421] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.68/26] IPv6=[] ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" HandleID="k8s-pod-network.1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" Sep 12 17:34:33.222761 containerd[1494]: 2025-09-12 17:34:33.177 [INFO][4396] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-6jjd6" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0", GenerateName:"calico-apiserver-5598f4bff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9173bd3-d47c-4dd9-b166-5150b07180f7", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5598f4bff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"", Pod:"calico-apiserver-5598f4bff7-6jjd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ddc65ee462", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:33.222761 containerd[1494]: 2025-09-12 17:34:33.177 [INFO][4396] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.68/32] ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-6jjd6" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" Sep 12 17:34:33.222761 containerd[1494]: 2025-09-12 17:34:33.177 [INFO][4396] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ddc65ee462 ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-6jjd6" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" Sep 12 17:34:33.222761 containerd[1494]: 2025-09-12 17:34:33.184 [INFO][4396] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-6jjd6" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" Sep 12 17:34:33.222761 containerd[1494]: 2025-09-12 
17:34:33.185 [INFO][4396] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-6jjd6" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0", GenerateName:"calico-apiserver-5598f4bff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9173bd3-d47c-4dd9-b166-5150b07180f7", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5598f4bff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5", Pod:"calico-apiserver-5598f4bff7-6jjd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ddc65ee462", MAC:"7a:1d:83:f6:67:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:33.222761 containerd[1494]: 2025-09-12 17:34:33.202 [INFO][4396] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5" Namespace="calico-apiserver" Pod="calico-apiserver-5598f4bff7-6jjd6" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0" Sep 12 17:34:33.262090 containerd[1494]: time="2025-09-12T17:34:33.261813636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:33.262090 containerd[1494]: time="2025-09-12T17:34:33.261875281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:33.262090 containerd[1494]: time="2025-09-12T17:34:33.261885903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:33.262090 containerd[1494]: time="2025-09-12T17:34:33.262026351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:33.300729 systemd[1]: Started cri-containerd-1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5.scope - libcontainer container 1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5. 
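Each endpoint above gets a host-side veth name such as cali9ddc65ee462: stable per workload and, at 15 bytes, within the kernel's interface-name limit (IFNAMSIZ minus the terminator). A sketch of one plausible derivation, assuming (not verified here) that Calico hashes a namespace.pod key with SHA-1 and keeps the first 11 hex digits after the "cali" prefix; the exact hash input is an assumption:

    package main

    import (
    	"crypto/sha1"
    	"encoding/hex"
    	"fmt"
    )

    // vethName sketches how a stable, IFNAMSIZ-safe name of the form
    // "cali" + 11 hex chars can be derived from a workload key. The
    // choice of namespace+"."+pod as the input is an assumption.
    func vethName(namespace, pod string) string {
    	sum := sha1.Sum([]byte(namespace + "." + pod))
    	return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
    	// Prints a 15-byte name in the same shape as the log's
    	// interfaces (the value need not match the real one).
    	fmt.Println(vethName("calico-apiserver", "calico-apiserver-5598f4bff7-6jjd6"))
    }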
Sep 12 17:34:33.306060 systemd-networkd[1391]: calie199f4a5e4f: Link UP Sep 12 17:34:33.307784 systemd-networkd[1391]: calie199f4a5e4f: Gained carrier Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.113 [INFO][4402] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0 coredns-674b8bbfcf- kube-system ddfd8960-9299-46a3-9189-694d4fce081a 967 0 2025-09-12 17:33:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-2-340685d2b8 coredns-674b8bbfcf-2drmz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie199f4a5e4f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Namespace="kube-system" Pod="coredns-674b8bbfcf-2drmz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.113 [INFO][4402] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Namespace="kube-system" Pod="coredns-674b8bbfcf-2drmz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.203 [INFO][4428] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" HandleID="k8s-pod-network.54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.204 [INFO][4428] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" HandleID="k8s-pod-network.54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eb00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-2-340685d2b8", "pod":"coredns-674b8bbfcf-2drmz", "timestamp":"2025-09-12 17:34:33.203879624 +0000 UTC"}, Hostname:"ci-4081-3-6-2-340685d2b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.204 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.204 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.204 [INFO][4428] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-2-340685d2b8' Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.245 [INFO][4428] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.255 [INFO][4428] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.260 [INFO][4428] ipam/ipam.go 511: Trying affinity for 192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.265 [INFO][4428] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.270 [INFO][4428] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.270 [INFO][4428] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.273 [INFO][4428] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473 Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.281 [INFO][4428] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.292 [INFO][4428] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.69/26] block=192.168.52.64/26 handle="k8s-pod-network.54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.292 [INFO][4428] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.69/26] handle="k8s-pod-network.54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.292 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:34:33.337554 containerd[1494]: 2025-09-12 17:34:33.292 [INFO][4428] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.69/26] IPv6=[] ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" HandleID="k8s-pod-network.54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" Sep 12 17:34:33.338194 containerd[1494]: 2025-09-12 17:34:33.298 [INFO][4402] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Namespace="kube-system" Pod="coredns-674b8bbfcf-2drmz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ddfd8960-9299-46a3-9189-694d4fce081a", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"", Pod:"coredns-674b8bbfcf-2drmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie199f4a5e4f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:33.338194 containerd[1494]: 2025-09-12 17:34:33.299 [INFO][4402] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.69/32] ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Namespace="kube-system" Pod="coredns-674b8bbfcf-2drmz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" Sep 12 17:34:33.338194 containerd[1494]: 2025-09-12 17:34:33.299 [INFO][4402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie199f4a5e4f ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Namespace="kube-system" Pod="coredns-674b8bbfcf-2drmz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" Sep 12 17:34:33.338194 containerd[1494]: 2025-09-12 17:34:33.307 [INFO][4402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-2drmz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" Sep 12 17:34:33.338194 containerd[1494]: 2025-09-12 17:34:33.310 [INFO][4402] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Namespace="kube-system" Pod="coredns-674b8bbfcf-2drmz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ddfd8960-9299-46a3-9189-694d4fce081a", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473", Pod:"coredns-674b8bbfcf-2drmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie199f4a5e4f", MAC:"f2:38:b2:8e:a3:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:33.338194 containerd[1494]: 2025-09-12 17:34:33.328 [INFO][4402] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473" Namespace="kube-system" Pod="coredns-674b8bbfcf-2drmz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0" Sep 12 17:34:33.378469 containerd[1494]: time="2025-09-12T17:34:33.378242375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:33.378469 containerd[1494]: time="2025-09-12T17:34:33.378298559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:33.378469 containerd[1494]: time="2025-09-12T17:34:33.378311797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:33.378629 containerd[1494]: time="2025-09-12T17:34:33.378393834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:33.408332 systemd[1]: Started cri-containerd-54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473.scope - libcontainer container 54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473. Sep 12 17:34:33.433452 containerd[1494]: time="2025-09-12T17:34:33.433363427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5598f4bff7-6jjd6,Uid:d9173bd3-d47c-4dd9-b166-5150b07180f7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5\"" Sep 12 17:34:33.510388 containerd[1494]: time="2025-09-12T17:34:33.510343498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2drmz,Uid:ddfd8960-9299-46a3-9189-694d4fce081a,Namespace:kube-system,Attempt:1,} returns sandbox id \"54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473\"" Sep 12 17:34:33.518288 containerd[1494]: time="2025-09-12T17:34:33.518256948Z" level=info msg="CreateContainer within sandbox \"54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:34:33.542313 containerd[1494]: time="2025-09-12T17:34:33.542265505Z" level=info msg="CreateContainer within sandbox \"54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38bef98937d1e9df1b627cc979eb8ba4298b2852b23d2fbde9a641d0583be05e\"" Sep 12 17:34:33.544655 containerd[1494]: time="2025-09-12T17:34:33.544620732Z" level=info msg="StartContainer for \"38bef98937d1e9df1b627cc979eb8ba4298b2852b23d2fbde9a641d0583be05e\"" Sep 12 17:34:33.605960 systemd[1]: Started cri-containerd-38bef98937d1e9df1b627cc979eb8ba4298b2852b23d2fbde9a641d0583be05e.scope - libcontainer container 38bef98937d1e9df1b627cc979eb8ba4298b2852b23d2fbde9a641d0583be05e. 
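The coredns endpoint dump above prints its WorkloadEndpointPort values as Go hex literals: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the coredns metrics port). A one-line check:

    package main

    import "fmt"

    func main() {
    	fmt.Println(0x35, 0x23c1) // 53 9153: DNS and coredns metrics
    }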
Sep 12 17:34:33.651944 containerd[1494]: time="2025-09-12T17:34:33.651906842Z" level=info msg="StartContainer for \"38bef98937d1e9df1b627cc979eb8ba4298b2852b23d2fbde9a641d0583be05e\" returns successfully" Sep 12 17:34:33.996808 containerd[1494]: time="2025-09-12T17:34:33.996750922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:33.997781 containerd[1494]: time="2025-09-12T17:34:33.997684298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:34:33.999689 containerd[1494]: time="2025-09-12T17:34:33.998582811Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:34.000697 containerd[1494]: time="2025-09-12T17:34:34.000564796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:34.001160 containerd[1494]: time="2025-09-12T17:34:34.001115269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.794887561s" Sep 12 17:34:34.001160 containerd[1494]: time="2025-09-12T17:34:34.001158738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:34:34.002679 containerd[1494]: time="2025-09-12T17:34:34.002561674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:34:34.007545 containerd[1494]: time="2025-09-12T17:34:34.007484666Z" level=info msg="CreateContainer within sandbox \"9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:34:34.046636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998956337.mount: Deactivated successfully. Sep 12 17:34:34.061877 containerd[1494]: time="2025-09-12T17:34:34.061830150Z" level=info msg="CreateContainer within sandbox \"9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f94ee21874a2769918b5c72b820e32f15969d647fb600ba41e787083b27edd2c\"" Sep 12 17:34:34.064749 containerd[1494]: time="2025-09-12T17:34:34.063753437Z" level=info msg="StartContainer for \"f94ee21874a2769918b5c72b820e32f15969d647fb600ba41e787083b27edd2c\"" Sep 12 17:34:34.102334 systemd[1]: Started cri-containerd-f94ee21874a2769918b5c72b820e32f15969d647fb600ba41e787083b27edd2c.scope - libcontainer container f94ee21874a2769918b5c72b820e32f15969d647fb600ba41e787083b27edd2c. 
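Containerd reports the apiserver image pull above as taking 2.794887561s; the kubelet pod_startup_latency_tracker entries that follow report podStartE2EDuration=28.268074055s and podStartSLOduration=25.469896583s for the same pod. The SLO figure appears to be the E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 2.798177471s), consistent with kubelet excluding pull time from its startup SLI. A quick check against the log's own timestamps, agreeing to within a nanosecond of rounding:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the kubelet entry below; parse
    	// errors are ignored for brevity in this sketch.
    	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
    	first, _ := time.Parse(layout, "2025-09-12 17:34:31.204202393 +0000 UTC")
    	last, _ := time.Parse(layout, "2025-09-12 17:34:34.002379864 +0000 UTC")
    	e2e := 28268074055 * time.Nanosecond
    	// Prints ~25.469896584s, matching the reported SLO duration.
    	fmt.Println(e2e - last.Sub(first))
    }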
Sep 12 17:34:34.142690 containerd[1494]: time="2025-09-12T17:34:34.142644831Z" level=info msg="StartContainer for \"f94ee21874a2769918b5c72b820e32f15969d647fb600ba41e787083b27edd2c\" returns successfully" Sep 12 17:34:34.248777 kubelet[2565]: I0912 17:34:34.248623 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2drmz" podStartSLOduration=39.248602305 podStartE2EDuration="39.248602305s" podCreationTimestamp="2025-09-12 17:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:34:34.232030028 +0000 UTC m=+44.483548853" watchObservedRunningTime="2025-09-12 17:34:34.248602305 +0000 UTC m=+44.500121130" Sep 12 17:34:34.269285 kubelet[2565]: I0912 17:34:34.269109 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5598f4bff7-rqkr4" podStartSLOduration=25.469896583 podStartE2EDuration="28.268074055s" podCreationTimestamp="2025-09-12 17:34:06 +0000 UTC" firstStartedPulling="2025-09-12 17:34:31.204202393 +0000 UTC m=+41.455721218" lastFinishedPulling="2025-09-12 17:34:34.002379864 +0000 UTC m=+44.253898690" observedRunningTime="2025-09-12 17:34:34.267326433 +0000 UTC m=+44.518845258" watchObservedRunningTime="2025-09-12 17:34:34.268074055 +0000 UTC m=+44.519592880" Sep 12 17:34:34.673304 systemd-networkd[1391]: cali9ddc65ee462: Gained IPv6LL Sep 12 17:34:34.867911 containerd[1494]: time="2025-09-12T17:34:34.866075174Z" level=info msg="StopPodSandbox for \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\"" Sep 12 17:34:34.868418 containerd[1494]: time="2025-09-12T17:34:34.868368212Z" level=info msg="StopPodSandbox for \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\"" Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:34.955 [INFO][4634] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:34.957 [INFO][4634] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" iface="eth0" netns="/var/run/netns/cni-09e74fdb-bed0-7188-5956-4b703240c7f5" Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:34.958 [INFO][4634] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" iface="eth0" netns="/var/run/netns/cni-09e74fdb-bed0-7188-5956-4b703240c7f5" Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:34.959 [INFO][4634] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" iface="eth0" netns="/var/run/netns/cni-09e74fdb-bed0-7188-5956-4b703240c7f5" Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:34.959 [INFO][4634] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:34.959 [INFO][4634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:35.008 [INFO][4648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" HandleID="k8s-pod-network.0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:35.009 [INFO][4648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:35.010 [INFO][4648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:35.020 [WARNING][4648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" HandleID="k8s-pod-network.0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:35.020 [INFO][4648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" HandleID="k8s-pod-network.0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:35.022 [INFO][4648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:35.034855 containerd[1494]: 2025-09-12 17:34:35.031 [INFO][4634] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Sep 12 17:34:35.043197 containerd[1494]: time="2025-09-12T17:34:35.035100840Z" level=info msg="TearDown network for sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\" successfully" Sep 12 17:34:35.043197 containerd[1494]: time="2025-09-12T17:34:35.035157385Z" level=info msg="StopPodSandbox for \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\" returns successfully" Sep 12 17:34:35.043197 containerd[1494]: time="2025-09-12T17:34:35.036083295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktgfz,Uid:6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6,Namespace:calico-system,Attempt:1,}" Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:34.976 [INFO][4635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:34.977 [INFO][4635] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" iface="eth0" netns="/var/run/netns/cni-5476836c-de51-4ec3-2747-feadd8004c4f" Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:34.977 [INFO][4635] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" iface="eth0" netns="/var/run/netns/cni-5476836c-de51-4ec3-2747-feadd8004c4f" Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:34.977 [INFO][4635] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" iface="eth0" netns="/var/run/netns/cni-5476836c-de51-4ec3-2747-feadd8004c4f" Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:34.977 [INFO][4635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:34.977 [INFO][4635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:35.015 [INFO][4653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" HandleID="k8s-pod-network.0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:35.015 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:35.022 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:35.030 [WARNING][4653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" HandleID="k8s-pod-network.0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:35.030 [INFO][4653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" HandleID="k8s-pod-network.0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:35.032 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:35.043197 containerd[1494]: 2025-09-12 17:34:35.039 [INFO][4635] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Sep 12 17:34:35.043197 containerd[1494]: time="2025-09-12T17:34:35.043040896Z" level=info msg="TearDown network for sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\" successfully" Sep 12 17:34:35.043197 containerd[1494]: time="2025-09-12T17:34:35.043087311Z" level=info msg="StopPodSandbox for \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\" returns successfully" Sep 12 17:34:35.050618 containerd[1494]: time="2025-09-12T17:34:35.044105609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-594f5c66f9-sz9rn,Uid:5d41cc7d-de46-4428-bd36-c4653b827654,Namespace:calico-system,Attempt:1,}" Sep 12 17:34:35.045259 systemd[1]: run-netns-cni\x2d09e74fdb\x2dbed0\x2d7188\x2d5956\x2d4b703240c7f5.mount: Deactivated successfully. Sep 12 17:34:35.054769 systemd[1]: run-netns-cni\x2d5476836c\x2dde51\x2d4ec3\x2d2747\x2dfeadd8004c4f.mount: Deactivated successfully. Sep 12 17:34:35.207981 kubelet[2565]: I0912 17:34:35.207862 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:34:35.282592 systemd-networkd[1391]: cali6495e4022c2: Link UP Sep 12 17:34:35.283757 systemd-networkd[1391]: cali6495e4022c2: Gained carrier Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.163 [INFO][4661] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0 calico-kube-controllers-594f5c66f9- calico-system 5d41cc7d-de46-4428-bd36-c4653b827654 1001 0 2025-09-12 17:34:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:594f5c66f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-2-340685d2b8 calico-kube-controllers-594f5c66f9-sz9rn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6495e4022c2 [] [] }} ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" Namespace="calico-system" Pod="calico-kube-controllers-594f5c66f9-sz9rn" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.164 [INFO][4661] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" Namespace="calico-system" Pod="calico-kube-controllers-594f5c66f9-sz9rn" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.214 [INFO][4686] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" HandleID="k8s-pod-network.c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.215 [INFO][4686] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" HandleID="k8s-pod-network.c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" 
Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-2-340685d2b8", "pod":"calico-kube-controllers-594f5c66f9-sz9rn", "timestamp":"2025-09-12 17:34:35.214936442 +0000 UTC"}, Hostname:"ci-4081-3-6-2-340685d2b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.215 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.215 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.215 [INFO][4686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-2-340685d2b8' Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.229 [INFO][4686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.235 [INFO][4686] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.243 [INFO][4686] ipam/ipam.go 511: Trying affinity for 192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.245 [INFO][4686] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.249 [INFO][4686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.249 [INFO][4686] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.251 [INFO][4686] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294 Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.261 [INFO][4686] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.271 [INFO][4686] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.70/26] block=192.168.52.64/26 handle="k8s-pod-network.c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.272 [INFO][4686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.70/26] handle="k8s-pod-network.c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.272 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:34:35.303268 containerd[1494]: 2025-09-12 17:34:35.272 [INFO][4686] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.70/26] IPv6=[] ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" HandleID="k8s-pod-network.c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" Sep 12 17:34:35.309077 containerd[1494]: 2025-09-12 17:34:35.276 [INFO][4661] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" Namespace="calico-system" Pod="calico-kube-controllers-594f5c66f9-sz9rn" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0", GenerateName:"calico-kube-controllers-594f5c66f9-", Namespace:"calico-system", SelfLink:"", UID:"5d41cc7d-de46-4428-bd36-c4653b827654", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"594f5c66f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"", Pod:"calico-kube-controllers-594f5c66f9-sz9rn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6495e4022c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:35.309077 containerd[1494]: 2025-09-12 17:34:35.277 [INFO][4661] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.70/32] ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" Namespace="calico-system" Pod="calico-kube-controllers-594f5c66f9-sz9rn" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" Sep 12 17:34:35.309077 containerd[1494]: 2025-09-12 17:34:35.277 [INFO][4661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6495e4022c2 ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" Namespace="calico-system" Pod="calico-kube-controllers-594f5c66f9-sz9rn" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" Sep 12 17:34:35.309077 containerd[1494]: 2025-09-12 17:34:35.284 [INFO][4661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" Namespace="calico-system" Pod="calico-kube-controllers-594f5c66f9-sz9rn" 
WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" Sep 12 17:34:35.309077 containerd[1494]: 2025-09-12 17:34:35.284 [INFO][4661] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" Namespace="calico-system" Pod="calico-kube-controllers-594f5c66f9-sz9rn" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0", GenerateName:"calico-kube-controllers-594f5c66f9-", Namespace:"calico-system", SelfLink:"", UID:"5d41cc7d-de46-4428-bd36-c4653b827654", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"594f5c66f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294", Pod:"calico-kube-controllers-594f5c66f9-sz9rn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6495e4022c2", MAC:"c2:43:a0:da:44:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:35.309077 containerd[1494]: 2025-09-12 17:34:35.300 [INFO][4661] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294" Namespace="calico-system" Pod="calico-kube-controllers-594f5c66f9-sz9rn" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0" Sep 12 17:34:35.313343 systemd-networkd[1391]: calie199f4a5e4f: Gained IPv6LL Sep 12 17:34:35.343889 containerd[1494]: time="2025-09-12T17:34:35.343579371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:35.343889 containerd[1494]: time="2025-09-12T17:34:35.343642970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:35.343889 containerd[1494]: time="2025-09-12T17:34:35.343660736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:35.343889 containerd[1494]: time="2025-09-12T17:34:35.343741490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:35.366603 systemd[1]: Started cri-containerd-c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294.scope - libcontainer container c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294. Sep 12 17:34:35.406293 systemd-networkd[1391]: cali3b5f6c9d167: Link UP Sep 12 17:34:35.408297 systemd-networkd[1391]: cali3b5f6c9d167: Gained carrier Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.169 [INFO][4665] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0 csi-node-driver- calico-system 6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6 1000 0 2025-09-12 17:34:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-2-340685d2b8 csi-node-driver-ktgfz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3b5f6c9d167 [] [] }} ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Namespace="calico-system" Pod="csi-node-driver-ktgfz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.169 [INFO][4665] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Namespace="calico-system" Pod="csi-node-driver-ktgfz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.228 [INFO][4691] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" HandleID="k8s-pod-network.ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.228 [INFO][4691] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" HandleID="k8s-pod-network.ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-2-340685d2b8", "pod":"csi-node-driver-ktgfz", "timestamp":"2025-09-12 17:34:35.228358353 +0000 UTC"}, Hostname:"ci-4081-3-6-2-340685d2b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.228 [INFO][4691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.272 [INFO][4691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.272 [INFO][4691] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-2-340685d2b8' Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.333 [INFO][4691] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.351 [INFO][4691] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.365 [INFO][4691] ipam/ipam.go 511: Trying affinity for 192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.371 [INFO][4691] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.374 [INFO][4691] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.374 [INFO][4691] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.376 [INFO][4691] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.383 [INFO][4691] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.395 [INFO][4691] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.71/26] block=192.168.52.64/26 handle="k8s-pod-network.ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.395 [INFO][4691] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.71/26] handle="k8s-pod-network.ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.395 [INFO][4691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:34:35.452862 containerd[1494]: 2025-09-12 17:34:35.395 [INFO][4691] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.71/26] IPv6=[] ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" HandleID="k8s-pod-network.ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" Sep 12 17:34:35.455217 containerd[1494]: 2025-09-12 17:34:35.397 [INFO][4665] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Namespace="calico-system" Pod="csi-node-driver-ktgfz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"", Pod:"csi-node-driver-ktgfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b5f6c9d167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:35.455217 containerd[1494]: 2025-09-12 17:34:35.398 [INFO][4665] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.71/32] ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Namespace="calico-system" Pod="csi-node-driver-ktgfz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" Sep 12 17:34:35.455217 containerd[1494]: 2025-09-12 17:34:35.398 [INFO][4665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b5f6c9d167 ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Namespace="calico-system" Pod="csi-node-driver-ktgfz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" Sep 12 17:34:35.455217 containerd[1494]: 2025-09-12 17:34:35.411 [INFO][4665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Namespace="calico-system" Pod="csi-node-driver-ktgfz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" Sep 12 17:34:35.455217 containerd[1494]: 2025-09-12 17:34:35.417 [INFO][4665] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Namespace="calico-system" Pod="csi-node-driver-ktgfz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd", Pod:"csi-node-driver-ktgfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b5f6c9d167", MAC:"ea:ee:c0:1e:1f:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:35.455217 containerd[1494]: 2025-09-12 17:34:35.449 [INFO][4665] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd" Namespace="calico-system" Pod="csi-node-driver-ktgfz" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0" Sep 12 17:34:35.472253 containerd[1494]: time="2025-09-12T17:34:35.471878911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:35.472253 containerd[1494]: time="2025-09-12T17:34:35.471933633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:35.472253 containerd[1494]: time="2025-09-12T17:34:35.471952512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:35.472253 containerd[1494]: time="2025-09-12T17:34:35.472049789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:35.497300 systemd[1]: Started cri-containerd-ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd.scope - libcontainer container ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd. 
Sep 12 17:34:35.504398 containerd[1494]: time="2025-09-12T17:34:35.504335082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-594f5c66f9-sz9rn,Uid:5d41cc7d-de46-4428-bd36-c4653b827654,Namespace:calico-system,Attempt:1,} returns sandbox id \"c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294\"" Sep 12 17:34:35.533366 containerd[1494]: time="2025-09-12T17:34:35.533319450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktgfz,Uid:6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6,Namespace:calico-system,Attempt:1,} returns sandbox id \"ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd\"" Sep 12 17:34:35.576695 kubelet[2565]: I0912 17:34:35.575756 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:34:35.868961 containerd[1494]: time="2025-09-12T17:34:35.868095321Z" level=info msg="StopPodSandbox for \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\"" Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.953 [INFO][4835] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.954 [INFO][4835] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" iface="eth0" netns="/var/run/netns/cni-ef55ebbb-ea5f-8d83-e4e8-191d4ea22220" Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.955 [INFO][4835] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" iface="eth0" netns="/var/run/netns/cni-ef55ebbb-ea5f-8d83-e4e8-191d4ea22220" Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.955 [INFO][4835] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" iface="eth0" netns="/var/run/netns/cni-ef55ebbb-ea5f-8d83-e4e8-191d4ea22220" Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.955 [INFO][4835] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.955 [INFO][4835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.979 [INFO][4842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" HandleID="k8s-pod-network.94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.979 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.979 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.992 [WARNING][4842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" HandleID="k8s-pod-network.94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.992 [INFO][4842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" HandleID="k8s-pod-network.94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.993 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:36.011169 containerd[1494]: 2025-09-12 17:34:35.997 [INFO][4835] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:36.015697 containerd[1494]: time="2025-09-12T17:34:36.014389021Z" level=info msg="TearDown network for sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\" successfully" Sep 12 17:34:36.015697 containerd[1494]: time="2025-09-12T17:34:36.014421355Z" level=info msg="StopPodSandbox for \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\" returns successfully" Sep 12 17:34:36.017143 containerd[1494]: time="2025-09-12T17:34:36.016611635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-524kk,Uid:fe8c226f-3398-4b35-97ba-3a5b1ba242b3,Namespace:kube-system,Attempt:1,}" Sep 12 17:34:36.041064 systemd[1]: run-netns-cni\x2def55ebbb\x2dea5f\x2d8d83\x2de4e8\x2d191d4ea22220.mount: Deactivated successfully. Sep 12 17:34:36.323195 systemd-networkd[1391]: cali0865a0b2c84: Link UP Sep 12 17:34:36.325070 systemd-networkd[1391]: cali0865a0b2c84: Gained carrier Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.186 [INFO][4863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0 coredns-674b8bbfcf- kube-system fe8c226f-3398-4b35-97ba-3a5b1ba242b3 1014 0 2025-09-12 17:33:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-2-340685d2b8 coredns-674b8bbfcf-524kk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0865a0b2c84 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" Namespace="kube-system" Pod="coredns-674b8bbfcf-524kk" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.187 [INFO][4863] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" Namespace="kube-system" Pod="coredns-674b8bbfcf-524kk" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.251 [INFO][4886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" HandleID="k8s-pod-network.889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" 
Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.252 [INFO][4886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" HandleID="k8s-pod-network.889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5040), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-2-340685d2b8", "pod":"coredns-674b8bbfcf-524kk", "timestamp":"2025-09-12 17:34:36.251439768 +0000 UTC"}, Hostname:"ci-4081-3-6-2-340685d2b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.252 [INFO][4886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.252 [INFO][4886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.252 [INFO][4886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-2-340685d2b8' Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.267 [INFO][4886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.276 [INFO][4886] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.283 [INFO][4886] ipam/ipam.go 511: Trying affinity for 192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.286 [INFO][4886] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.289 [INFO][4886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.289 [INFO][4886] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.295 [INFO][4886] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96 Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.302 [INFO][4886] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.312 [INFO][4886] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.72/26] block=192.168.52.64/26 handle="k8s-pod-network.889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.313 [INFO][4886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.72/26] 
handle="k8s-pod-network.889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" host="ci-4081-3-6-2-340685d2b8" Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.313 [INFO][4886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:36.356825 containerd[1494]: 2025-09-12 17:34:36.313 [INFO][4886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.72/26] IPv6=[] ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" HandleID="k8s-pod-network.889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:36.358580 containerd[1494]: 2025-09-12 17:34:36.317 [INFO][4863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" Namespace="kube-system" Pod="coredns-674b8bbfcf-524kk" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe8c226f-3398-4b35-97ba-3a5b1ba242b3", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"", Pod:"coredns-674b8bbfcf-524kk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0865a0b2c84", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:36.358580 containerd[1494]: 2025-09-12 17:34:36.317 [INFO][4863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.72/32] ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" Namespace="kube-system" Pod="coredns-674b8bbfcf-524kk" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:36.358580 containerd[1494]: 2025-09-12 17:34:36.317 [INFO][4863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0865a0b2c84 ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" Namespace="kube-system" Pod="coredns-674b8bbfcf-524kk" 
WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:36.358580 containerd[1494]: 2025-09-12 17:34:36.326 [INFO][4863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" Namespace="kube-system" Pod="coredns-674b8bbfcf-524kk" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:36.358580 containerd[1494]: 2025-09-12 17:34:36.326 [INFO][4863] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" Namespace="kube-system" Pod="coredns-674b8bbfcf-524kk" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe8c226f-3398-4b35-97ba-3a5b1ba242b3", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96", Pod:"coredns-674b8bbfcf-524kk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0865a0b2c84", MAC:"0a:3b:23:ed:94:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:36.358580 containerd[1494]: 2025-09-12 17:34:36.346 [INFO][4863] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96" Namespace="kube-system" Pod="coredns-674b8bbfcf-524kk" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:36.421143 containerd[1494]: time="2025-09-12T17:34:36.420378630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:36.422129 containerd[1494]: time="2025-09-12T17:34:36.421042755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:36.422129 containerd[1494]: time="2025-09-12T17:34:36.421059148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:36.423467 containerd[1494]: time="2025-09-12T17:34:36.422208909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:36.447332 systemd[1]: Started cri-containerd-889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96.scope - libcontainer container 889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96. Sep 12 17:34:36.467065 systemd-networkd[1391]: cali3b5f6c9d167: Gained IPv6LL Sep 12 17:34:36.529111 containerd[1494]: time="2025-09-12T17:34:36.529065525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-524kk,Uid:fe8c226f-3398-4b35-97ba-3a5b1ba242b3,Namespace:kube-system,Attempt:1,} returns sandbox id \"889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96\"" Sep 12 17:34:36.547077 containerd[1494]: time="2025-09-12T17:34:36.546857632Z" level=info msg="CreateContainer within sandbox \"889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:34:36.581174 containerd[1494]: time="2025-09-12T17:34:36.580163189Z" level=info msg="CreateContainer within sandbox \"889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a73e4ff0ddccb9300e53517a11f15ce784f2ccce32b8b3339d44748341ce4043\"" Sep 12 17:34:36.583753 containerd[1494]: time="2025-09-12T17:34:36.583715439Z" level=info msg="StartContainer for \"a73e4ff0ddccb9300e53517a11f15ce784f2ccce32b8b3339d44748341ce4043\"" Sep 12 17:34:36.614326 systemd[1]: Started cri-containerd-a73e4ff0ddccb9300e53517a11f15ce784f2ccce32b8b3339d44748341ce4043.scope - libcontainer container a73e4ff0ddccb9300e53517a11f15ce784f2ccce32b8b3339d44748341ce4043. Sep 12 17:34:36.659776 containerd[1494]: time="2025-09-12T17:34:36.659658153Z" level=info msg="StartContainer for \"a73e4ff0ddccb9300e53517a11f15ce784f2ccce32b8b3339d44748341ce4043\" returns successfully" Sep 12 17:34:37.043964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761200600.mount: Deactivated successfully. 
Sep 12 17:34:37.170389 systemd-networkd[1391]: cali6495e4022c2: Gained IPv6LL Sep 12 17:34:37.301982 kubelet[2565]: I0912 17:34:37.290816 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-524kk" podStartSLOduration=42.29079336 podStartE2EDuration="42.29079336s" podCreationTimestamp="2025-09-12 17:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:34:37.266299071 +0000 UTC m=+47.517817896" watchObservedRunningTime="2025-09-12 17:34:37.29079336 +0000 UTC m=+47.542312185" Sep 12 17:34:37.564855 containerd[1494]: time="2025-09-12T17:34:37.564722727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:37.589979 containerd[1494]: time="2025-09-12T17:34:37.589899887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:34:37.614299 containerd[1494]: time="2025-09-12T17:34:37.614187397Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:37.617105 containerd[1494]: time="2025-09-12T17:34:37.616175497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:37.619494 containerd[1494]: time="2025-09-12T17:34:37.618872291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.614530987s" Sep 12 17:34:37.619494 containerd[1494]: time="2025-09-12T17:34:37.619033036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:34:37.633698 containerd[1494]: time="2025-09-12T17:34:37.632015031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:34:37.640303 containerd[1494]: time="2025-09-12T17:34:37.639944539Z" level=info msg="CreateContainer within sandbox \"7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:34:37.678216 containerd[1494]: time="2025-09-12T17:34:37.677269035Z" level=info msg="CreateContainer within sandbox \"7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"debb573418ad352cf264e371b67b882cb2a25a64b3be2cc69c32a0c34bc8b581\"" Sep 12 17:34:37.693062 containerd[1494]: time="2025-09-12T17:34:37.693003328Z" level=info msg="StartContainer for \"debb573418ad352cf264e371b67b882cb2a25a64b3be2cc69c32a0c34bc8b581\"" Sep 12 17:34:37.764670 systemd[1]: run-containerd-runc-k8s.io-debb573418ad352cf264e371b67b882cb2a25a64b3be2cc69c32a0c34bc8b581-runc.CTTqcD.mount: Deactivated successfully. 
Sep 12 17:34:37.774400 systemd[1]: Started cri-containerd-debb573418ad352cf264e371b67b882cb2a25a64b3be2cc69c32a0c34bc8b581.scope - libcontainer container debb573418ad352cf264e371b67b882cb2a25a64b3be2cc69c32a0c34bc8b581. Sep 12 17:34:37.859010 containerd[1494]: time="2025-09-12T17:34:37.858825874Z" level=info msg="StartContainer for \"debb573418ad352cf264e371b67b882cb2a25a64b3be2cc69c32a0c34bc8b581\" returns successfully" Sep 12 17:34:38.107341 containerd[1494]: time="2025-09-12T17:34:38.107278042Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:38.107941 containerd[1494]: time="2025-09-12T17:34:38.107898744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:34:38.109610 containerd[1494]: time="2025-09-12T17:34:38.109532703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 477.480766ms" Sep 12 17:34:38.109610 containerd[1494]: time="2025-09-12T17:34:38.109566360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:34:38.110691 containerd[1494]: time="2025-09-12T17:34:38.110626681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:34:38.115110 containerd[1494]: time="2025-09-12T17:34:38.115081132Z" level=info msg="CreateContainer within sandbox \"1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:34:38.141273 containerd[1494]: time="2025-09-12T17:34:38.141213111Z" level=info msg="CreateContainer within sandbox \"1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8c4007a5d99dd388e9ba638f4c245be637d1f832ff68e67b722a21da9b1b6ad3\"" Sep 12 17:34:38.141812 containerd[1494]: time="2025-09-12T17:34:38.141774154Z" level=info msg="StartContainer for \"8c4007a5d99dd388e9ba638f4c245be637d1f832ff68e67b722a21da9b1b6ad3\"" Sep 12 17:34:38.174327 systemd[1]: Started cri-containerd-8c4007a5d99dd388e9ba638f4c245be637d1f832ff68e67b722a21da9b1b6ad3.scope - libcontainer container 8c4007a5d99dd388e9ba638f4c245be637d1f832ff68e67b722a21da9b1b6ad3. 
Sep 12 17:34:38.214544 containerd[1494]: time="2025-09-12T17:34:38.214381004Z" level=info msg="StartContainer for \"8c4007a5d99dd388e9ba638f4c245be637d1f832ff68e67b722a21da9b1b6ad3\" returns successfully"
Sep 12 17:34:38.316967 kubelet[2565]: I0912 17:34:38.315343 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-5bksp" podStartSLOduration=23.880777029 podStartE2EDuration="29.315321818s" podCreationTimestamp="2025-09-12 17:34:09 +0000 UTC" firstStartedPulling="2025-09-12 17:34:32.185772787 +0000 UTC m=+42.437291611" lastFinishedPulling="2025-09-12 17:34:37.620317574 +0000 UTC m=+47.871836400" observedRunningTime="2025-09-12 17:34:38.302984394 +0000 UTC m=+48.554503220" watchObservedRunningTime="2025-09-12 17:34:38.315321818 +0000 UTC m=+48.566840644"
Sep 12 17:34:38.321321 systemd-networkd[1391]: cali0865a0b2c84: Gained IPv6LL
Sep 12 17:34:39.295990 kubelet[2565]: I0912 17:34:39.295442 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:34:39.295990 kubelet[2565]: I0912 17:34:39.295745 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:34:42.521709 containerd[1494]: time="2025-09-12T17:34:42.521352966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:42.531330 containerd[1494]: time="2025-09-12T17:34:42.523114668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 12 17:34:42.531330 containerd[1494]: time="2025-09-12T17:34:42.524553233Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:42.536959 containerd[1494]: time="2025-09-12T17:34:42.536907817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:42.537474 containerd[1494]: time="2025-09-12T17:34:42.537448541Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.426787622s"
Sep 12 17:34:42.540173 containerd[1494]: time="2025-09-12T17:34:42.537492599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 12 17:34:42.656176 containerd[1494]: time="2025-09-12T17:34:42.655493099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 12 17:34:42.880744 containerd[1494]: time="2025-09-12T17:34:42.880641338Z" level=info msg="CreateContainer within sandbox \"c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 17:34:42.923168 containerd[1494]: time="2025-09-12T17:34:42.920045884Z" level=info msg="CreateContainer within sandbox \"c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"539da58930056c92733ddcf3845b12b4519111182470a553d9d112d48e5315a4\""
Sep 12 17:34:42.922193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033904886.mount: Deactivated successfully.
Sep 12 17:34:42.954126 containerd[1494]: time="2025-09-12T17:34:42.954075001Z" level=info msg="StartContainer for \"539da58930056c92733ddcf3845b12b4519111182470a553d9d112d48e5315a4\""
Sep 12 17:34:43.096506 systemd[1]: Started cri-containerd-539da58930056c92733ddcf3845b12b4519111182470a553d9d112d48e5315a4.scope - libcontainer container 539da58930056c92733ddcf3845b12b4519111182470a553d9d112d48e5315a4.
Sep 12 17:34:43.280015 containerd[1494]: time="2025-09-12T17:34:43.279980316Z" level=info msg="StartContainer for \"539da58930056c92733ddcf3845b12b4519111182470a553d9d112d48e5315a4\" returns successfully"
Sep 12 17:34:43.561730 kubelet[2565]: I0912 17:34:43.559982 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5598f4bff7-6jjd6" podStartSLOduration=32.863962442 podStartE2EDuration="37.532089255s" podCreationTimestamp="2025-09-12 17:34:06 +0000 UTC" firstStartedPulling="2025-09-12 17:34:33.44232738 +0000 UTC m=+43.693846205" lastFinishedPulling="2025-09-12 17:34:38.110454193 +0000 UTC m=+48.361973018" observedRunningTime="2025-09-12 17:34:38.337440857 +0000 UTC m=+48.588959682" watchObservedRunningTime="2025-09-12 17:34:43.532089255 +0000 UTC m=+53.783608080"
Sep 12 17:34:43.583816 kubelet[2565]: I0912 17:34:43.583442 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-594f5c66f9-sz9rn" podStartSLOduration=26.438110041 podStartE2EDuration="33.583413218s" podCreationTimestamp="2025-09-12 17:34:10 +0000 UTC" firstStartedPulling="2025-09-12 17:34:35.505929499 +0000 UTC m=+45.757448325" lastFinishedPulling="2025-09-12 17:34:42.651232676 +0000 UTC m=+52.902751502" observedRunningTime="2025-09-12 17:34:43.471949991 +0000 UTC m=+53.723468826" watchObservedRunningTime="2025-09-12 17:34:43.583413218 +0000 UTC m=+53.834932043"
Sep 12 17:34:44.001408 kubelet[2565]: I0912 17:34:44.001355 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:34:44.048820 systemd[1]: run-containerd-runc-k8s.io-debb573418ad352cf264e371b67b882cb2a25a64b3be2cc69c32a0c34bc8b581-runc.LveNmy.mount: Deactivated successfully.
Sep 12 17:34:44.451501 systemd[1]: run-containerd-runc-k8s.io-539da58930056c92733ddcf3845b12b4519111182470a553d9d112d48e5315a4-runc.ZBKmlc.mount: Deactivated successfully.
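The pod_startup_latency_tracker lines above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, most precisely from the monotonic m=+ offsets). A small Go check against the goldmane figures, using only values printed in the log:

```go
// Sketch: reproduce kubelet's goldmane-54d579b49d-5bksp startup figures.
// The SLO duration is the E2E duration with the image pull window removed,
// which is how the tracker separates pull time from actual startup time.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-09-12T17:34:09Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-09-12T17:34:38.315321818Z") // watchObservedRunningTime

	// Pull window from the monotonic m=+ offsets, in nanoseconds:
	// m=+47.871836400 (lastFinishedPulling) - m=+42.437291611 (firstStartedPulling)
	pull := (47871836400 - 42437291611) * time.Nanosecond

	e2e := observed.Sub(created)
	fmt.Println("podStartE2EDuration:", e2e)      // 29.315321818s
	fmt.Println("podStartSLOduration:", e2e-pull) // 23.880777029s, matching the log
}
```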
Sep 12 17:34:44.590296 containerd[1494]: time="2025-09-12T17:34:44.590199686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:44.592074 containerd[1494]: time="2025-09-12T17:34:44.592045397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 12 17:34:44.593024 containerd[1494]: time="2025-09-12T17:34:44.592995708Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:44.595200 containerd[1494]: time="2025-09-12T17:34:44.595181409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:44.597313 containerd[1494]: time="2025-09-12T17:34:44.597290326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.94175856s"
Sep 12 17:34:44.597847 containerd[1494]: time="2025-09-12T17:34:44.597447691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 12 17:34:44.691830 containerd[1494]: time="2025-09-12T17:34:44.691649384Z" level=info msg="CreateContainer within sandbox \"ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 17:34:44.777947 containerd[1494]: time="2025-09-12T17:34:44.777906960Z" level=info msg="CreateContainer within sandbox \"ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2c173dc276c8c2d607387c98ca1327d8bb44adf9876cf894df88a1b60a04a88e\""
Sep 12 17:34:44.778798 containerd[1494]: time="2025-09-12T17:34:44.778783032Z" level=info msg="StartContainer for \"2c173dc276c8c2d607387c98ca1327d8bb44adf9876cf894df88a1b60a04a88e\""
Sep 12 17:34:44.937810 systemd[1]: Started cri-containerd-2c173dc276c8c2d607387c98ca1327d8bb44adf9876cf894df88a1b60a04a88e.scope - libcontainer container 2c173dc276c8c2d607387c98ca1327d8bb44adf9876cf894df88a1b60a04a88e.
Sep 12 17:34:45.086940 systemd[1]: run-containerd-runc-k8s.io-debb573418ad352cf264e371b67b882cb2a25a64b3be2cc69c32a0c34bc8b581-runc.kkdwNf.mount: Deactivated successfully.
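The "&ContainerMetadata{Name:calico-csi,Attempt:0,}" fragments above are containerd echoing back part of the CRI CreateContainer request it received from the kubelet. For orientation, a sketch of that request's shape using the upstream CRI Go types; the sandbox ID and image reference are taken from the log, everything else is illustrative rather than the kubelet's full request.

```go
// Sketch: the CRI CreateContainer call behind the
// "CreateContainer within sandbox ... for container &ContainerMetadata{...}"
// lines. Types come from k8s.io/cri-api (runtime v1).
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	req := &runtimeapi.CreateContainerRequest{
		PodSandboxId: "ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-csi", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/csi:v3.30.3"},
		},
	}
	// The generated String() prints the same shape the log echoes back:
	// &ContainerMetadata{Name:calico-csi,Attempt:0,}
	fmt.Println(req.Config.Metadata)
}
```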
Sep 12 17:34:45.106809 containerd[1494]: time="2025-09-12T17:34:45.106683990Z" level=info msg="StartContainer for \"2c173dc276c8c2d607387c98ca1327d8bb44adf9876cf894df88a1b60a04a88e\" returns successfully"
Sep 12 17:34:45.118315 containerd[1494]: time="2025-09-12T17:34:45.118264100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 17:34:46.856516 containerd[1494]: time="2025-09-12T17:34:46.856467815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:46.857575 containerd[1494]: time="2025-09-12T17:34:46.857542779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 17:34:46.859152 containerd[1494]: time="2025-09-12T17:34:46.858435079Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:46.864146 containerd[1494]: time="2025-09-12T17:34:46.864081464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:46.865083 containerd[1494]: time="2025-09-12T17:34:46.864565109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.746253895s"
Sep 12 17:34:46.865083 containerd[1494]: time="2025-09-12T17:34:46.864593075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 17:34:46.869618 containerd[1494]: time="2025-09-12T17:34:46.869563259Z" level=info msg="CreateContainer within sandbox \"ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:34:46.884558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597109777.mount: Deactivated successfully.
Sep 12 17:34:46.889421 containerd[1494]: time="2025-09-12T17:34:46.889369853Z" level=info msg="CreateContainer within sandbox \"ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e5d30bfc5b7af29653583287f978e7ffb5bd5059bd267bd8012b6cf6572a41df\""
Sep 12 17:34:46.890457 containerd[1494]: time="2025-09-12T17:34:46.890435760Z" level=info msg="StartContainer for \"e5d30bfc5b7af29653583287f978e7ffb5bd5059bd267bd8012b6cf6572a41df\""
Sep 12 17:34:46.931290 systemd[1]: Started cri-containerd-e5d30bfc5b7af29653583287f978e7ffb5bd5059bd267bd8012b6cf6572a41df.scope - libcontainer container e5d30bfc5b7af29653583287f978e7ffb5bd5059bd267bd8012b6cf6572a41df.
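containerd's lines here follow logrus's key=value text format (time="…" level=… msg="…"), with embedded quotes escaped as \". If you want to post-process a journal like this one, e.g. to collect all pull durations, a small extraction sketch; the regular expression is a simplification that only handles the escapes visible above.

```go
// Sketch: pull the time, level, and msg fields out of a containerd
// logrus-style line. The msg value may contain \" escapes, which the
// (?:[^"\\]|\\.)* alternation allows for.
package main

import (
	"fmt"
	"regexp"
)

var containerdLine = regexp.MustCompile(`time="([^"]*)" level=(\w+) msg="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `time="2025-09-12T17:34:46.864593075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""`
	if m := containerdLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%s level=%s\nmsg=%s\n", m[1], m[2], m[3])
	}
}
```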
Sep 12 17:34:46.960515 containerd[1494]: time="2025-09-12T17:34:46.960484165Z" level=info msg="StartContainer for \"e5d30bfc5b7af29653583287f978e7ffb5bd5059bd267bd8012b6cf6572a41df\" returns successfully"
Sep 12 17:34:47.139157 kubelet[2565]: I0912 17:34:47.138991 2565 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:34:47.141391 kubelet[2565]: I0912 17:34:47.141367 2565 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:34:47.439674 kubelet[2565]: I0912 17:34:47.439305 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ktgfz" podStartSLOduration=26.108203911 podStartE2EDuration="37.439286769s" podCreationTimestamp="2025-09-12 17:34:10 +0000 UTC" firstStartedPulling="2025-09-12 17:34:35.534572275 +0000 UTC m=+45.786091100" lastFinishedPulling="2025-09-12 17:34:46.865655133 +0000 UTC m=+57.117173958" observedRunningTime="2025-09-12 17:34:47.438727074 +0000 UTC m=+57.690245899" watchObservedRunningTime="2025-09-12 17:34:47.439286769 +0000 UTC m=+57.690805595"
Sep 12 17:34:49.965101 containerd[1494]: time="2025-09-12T17:34:49.965035551Z" level=info msg="StopPodSandbox for \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\""
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.209 [WARNING][5301] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd", Pod:"csi-node-driver-ktgfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b5f6c9d167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.212 [INFO][5301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6"
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.212 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" iface="eth0" netns=""
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.212 [INFO][5301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6"
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.212 [INFO][5301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6"
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.336 [INFO][5309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" HandleID="k8s-pod-network.0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0"
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.339 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.339 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.353 [WARNING][5309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" HandleID="k8s-pod-network.0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0"
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.353 [INFO][5309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" HandleID="k8s-pod-network.0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0"
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.356 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:50.360313 containerd[1494]: 2025-09-12 17:34:50.358 [INFO][5301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6"
Sep 12 17:34:50.368686 containerd[1494]: time="2025-09-12T17:34:50.368563883Z" level=info msg="TearDown network for sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\" successfully"
Sep 12 17:34:50.368686 containerd[1494]: time="2025-09-12T17:34:50.368616337Z" level=info msg="StopPodSandbox for \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\" returns successfully"
Sep 12 17:34:50.494225 containerd[1494]: time="2025-09-12T17:34:50.494174997Z" level=info msg="RemovePodSandbox for \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\""
Sep 12 17:34:50.502241 containerd[1494]: time="2025-09-12T17:34:50.502188800Z" level=info msg="Forcibly stopping sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\""
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.560 [WARNING][5323] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d64cafd-8d22-4ebe-ad67-ac3e14daa0d6", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"ac6d8e1080ca764cc51294337aeb2553829d702c80a49def3d45dba0e74cdabd", Pod:"csi-node-driver-ktgfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b5f6c9d167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.561 [INFO][5323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6"
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.561 [INFO][5323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" iface="eth0" netns=""
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.561 [INFO][5323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6"
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.561 [INFO][5323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6"
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.601 [INFO][5331] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" HandleID="k8s-pod-network.0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0"
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.601 [INFO][5331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.601 [INFO][5331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.618 [WARNING][5331] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" HandleID="k8s-pod-network.0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0"
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.618 [INFO][5331] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" HandleID="k8s-pod-network.0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6" Workload="ci--4081--3--6--2--340685d2b8-k8s-csi--node--driver--ktgfz-eth0"
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.619 [INFO][5331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:50.633302 containerd[1494]: 2025-09-12 17:34:50.624 [INFO][5323] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6"
Sep 12 17:34:50.633302 containerd[1494]: time="2025-09-12T17:34:50.633020729Z" level=info msg="TearDown network for sandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\" successfully"
Sep 12 17:34:50.682514 containerd[1494]: time="2025-09-12T17:34:50.682287444Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:34:50.717700 containerd[1494]: time="2025-09-12T17:34:50.717593959Z" level=info msg="RemovePodSandbox \"0e7868ff94510824127083aa36717b3da7d6a92159dca66304aaebc59581dcc6\" returns successfully"
Sep 12 17:34:50.724399 containerd[1494]: time="2025-09-12T17:34:50.724069757Z" level=info msg="StopPodSandbox for \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\""
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.774 [WARNING][5346] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0", GenerateName:"calico-apiserver-5598f4bff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9173bd3-d47c-4dd9-b166-5150b07180f7", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5598f4bff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5", Pod:"calico-apiserver-5598f4bff7-6jjd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ddc65ee462", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.774 [INFO][5346] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498"
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.774 [INFO][5346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" iface="eth0" netns=""
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.774 [INFO][5346] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498"
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.774 [INFO][5346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498"
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.804 [INFO][5353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" HandleID="k8s-pod-network.a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0"
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.804 [INFO][5353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.804 [INFO][5353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.810 [WARNING][5353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" HandleID="k8s-pod-network.a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0"
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.810 [INFO][5353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" HandleID="k8s-pod-network.a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0"
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.812 [INFO][5353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:50.817291 containerd[1494]: 2025-09-12 17:34:50.814 [INFO][5346] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498"
Sep 12 17:34:50.819504 containerd[1494]: time="2025-09-12T17:34:50.817334267Z" level=info msg="TearDown network for sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\" successfully"
Sep 12 17:34:50.819504 containerd[1494]: time="2025-09-12T17:34:50.817365800Z" level=info msg="StopPodSandbox for \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\" returns successfully"
Sep 12 17:34:50.819504 containerd[1494]: time="2025-09-12T17:34:50.818012565Z" level=info msg="RemovePodSandbox for \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\""
Sep 12 17:34:50.819504 containerd[1494]: time="2025-09-12T17:34:50.818061062Z" level=info msg="Forcibly stopping sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\""
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.891 [WARNING][5368] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0", GenerateName:"calico-apiserver-5598f4bff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9173bd3-d47c-4dd9-b166-5150b07180f7", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5598f4bff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"1e34d8ad1bd6f7dd1adf2cfd5184c5da3e2ef49eaad5ebd243fd71bcc3b9edc5", Pod:"calico-apiserver-5598f4bff7-6jjd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ddc65ee462", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.891 [INFO][5368] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498"
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.891 [INFO][5368] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" iface="eth0" netns=""
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.891 [INFO][5368] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498"
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.891 [INFO][5368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498"
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.932 [INFO][5376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" HandleID="k8s-pod-network.a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0"
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.933 [INFO][5376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.933 [INFO][5376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.950 [WARNING][5376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" HandleID="k8s-pod-network.a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0"
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.950 [INFO][5376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" HandleID="k8s-pod-network.a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--6jjd6-eth0"
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.955 [INFO][5376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:50.961488 containerd[1494]: 2025-09-12 17:34:50.959 [INFO][5368] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498"
Sep 12 17:34:50.961488 containerd[1494]: time="2025-09-12T17:34:50.961463357Z" level=info msg="TearDown network for sandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\" successfully"
Sep 12 17:34:50.965052 containerd[1494]: time="2025-09-12T17:34:50.964991859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:34:50.965128 containerd[1494]: time="2025-09-12T17:34:50.965104332Z" level=info msg="RemovePodSandbox \"a13e06a6e775dab6e89e86e156e040ca3263cf2bf5fda867e4535b088c458498\" returns successfully"
Sep 12 17:34:50.965557 containerd[1494]: time="2025-09-12T17:34:50.965536532Z" level=info msg="StopPodSandbox for \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\""
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.035 [WARNING][5390] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ddfd8960-9299-46a3-9189-694d4fce081a", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473", Pod:"coredns-674b8bbfcf-2drmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie199f4a5e4f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.036 [INFO][5390] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3"
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.036 [INFO][5390] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" iface="eth0" netns=""
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.036 [INFO][5390] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3"
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.036 [INFO][5390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3"
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.070 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" HandleID="k8s-pod-network.5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0"
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.070 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.070 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.079 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" HandleID="k8s-pod-network.5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0"
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.079 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" HandleID="k8s-pod-network.5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0"
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.082 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:51.087694 containerd[1494]: 2025-09-12 17:34:51.085 [INFO][5390] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3"
Sep 12 17:34:51.089380 containerd[1494]: time="2025-09-12T17:34:51.087748135Z" level=info msg="TearDown network for sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\" successfully"
Sep 12 17:34:51.089380 containerd[1494]: time="2025-09-12T17:34:51.087780099Z" level=info msg="StopPodSandbox for \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\" returns successfully"
Sep 12 17:34:51.090001 containerd[1494]: time="2025-09-12T17:34:51.089657495Z" level=info msg="RemovePodSandbox for \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\""
Sep 12 17:34:51.090001 containerd[1494]: time="2025-09-12T17:34:51.089708987Z" level=info msg="Forcibly stopping sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\""
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.126 [WARNING][5411] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ddfd8960-9299-46a3-9189-694d4fce081a", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"54a6688e9c9087f1849df15ded318c77074f849c22d9aab2d54948ba5c1f4473", Pod:"coredns-674b8bbfcf-2drmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie199f4a5e4f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.126 [INFO][5411] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3"
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.126 [INFO][5411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" iface="eth0" netns=""
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.126 [INFO][5411] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3"
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.126 [INFO][5411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3"
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.151 [INFO][5418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" HandleID="k8s-pod-network.5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0"
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.151 [INFO][5418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.151 [INFO][5418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.158 [WARNING][5418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" HandleID="k8s-pod-network.5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0"
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.159 [INFO][5418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" HandleID="k8s-pod-network.5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--2drmz-eth0"
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.162 [INFO][5418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:51.169362 containerd[1494]: 2025-09-12 17:34:51.164 [INFO][5411] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3"
Sep 12 17:34:51.169362 containerd[1494]: time="2025-09-12T17:34:51.169310093Z" level=info msg="TearDown network for sandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\" successfully"
Sep 12 17:34:51.174495 containerd[1494]: time="2025-09-12T17:34:51.173899957Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:34:51.174495 containerd[1494]: time="2025-09-12T17:34:51.173964906Z" level=info msg="RemovePodSandbox \"5acf82221adba023a1a31bcdde14c910c3ef9c774c30f5c6adc06041d8894fd3\" returns successfully"
Sep 12 17:34:51.175006 containerd[1494]: time="2025-09-12T17:34:51.174641419Z" level=info msg="StopPodSandbox for \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\""
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.214 [WARNING][5432] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0", GenerateName:"calico-kube-controllers-594f5c66f9-", Namespace:"calico-system", SelfLink:"", UID:"5d41cc7d-de46-4428-bd36-c4653b827654", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"594f5c66f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294", Pod:"calico-kube-controllers-594f5c66f9-sz9rn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6495e4022c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.214 [INFO][5432] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9"
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.214 [INFO][5432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" iface="eth0" netns=""
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.214 [INFO][5432] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9"
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.214 [INFO][5432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9"
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.248 [INFO][5440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" HandleID="k8s-pod-network.0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0"
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.248 [INFO][5440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.248 [INFO][5440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.254 [WARNING][5440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" HandleID="k8s-pod-network.0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0"
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.254 [INFO][5440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" HandleID="k8s-pod-network.0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0"
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.256 [INFO][5440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:51.261091 containerd[1494]: 2025-09-12 17:34:51.258 [INFO][5432] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9"
Sep 12 17:34:51.262983 containerd[1494]: time="2025-09-12T17:34:51.261155811Z" level=info msg="TearDown network for sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\" successfully"
Sep 12 17:34:51.262983 containerd[1494]: time="2025-09-12T17:34:51.261179487Z" level=info msg="StopPodSandbox for \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\" returns successfully"
Sep 12 17:34:51.262983 containerd[1494]: time="2025-09-12T17:34:51.262285382Z" level=info msg="RemovePodSandbox for \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\""
Sep 12 17:34:51.262983 containerd[1494]: time="2025-09-12T17:34:51.262307146Z" level=info msg="Forcibly stopping sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\""
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.303 [WARNING][5454] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0", GenerateName:"calico-kube-controllers-594f5c66f9-", Namespace:"calico-system", SelfLink:"", UID:"5d41cc7d-de46-4428-bd36-c4653b827654", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"594f5c66f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"c665103e9e52daa87cb2014d1f82fb0aef92a7d25cab0ee669a25b0ffc274294", Pod:"calico-kube-controllers-594f5c66f9-sz9rn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6495e4022c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.303 [INFO][5454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9"
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.303 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" iface="eth0" netns=""
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.303 [INFO][5454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9"
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.303 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9"
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.334 [INFO][5461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" HandleID="k8s-pod-network.0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0"
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.334 [INFO][5461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.334 [INFO][5461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.344 [WARNING][5461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" HandleID="k8s-pod-network.0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0"
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.344 [INFO][5461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" HandleID="k8s-pod-network.0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--kube--controllers--594f5c66f9--sz9rn-eth0"
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.346 [INFO][5461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:51.352192 containerd[1494]: 2025-09-12 17:34:51.349 [INFO][5454] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9"
Sep 12 17:34:51.352776 containerd[1494]: time="2025-09-12T17:34:51.352261709Z" level=info msg="TearDown network for sandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\" successfully"
Sep 12 17:34:51.357315 containerd[1494]: time="2025-09-12T17:34:51.357246174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:34:51.357705 containerd[1494]: time="2025-09-12T17:34:51.357400821Z" level=info msg="RemovePodSandbox \"0817615221d00c728e93ba37bf5c896e58ae96620f42dc1c26c9bf8e1859a1a9\" returns successfully"
Sep 12 17:34:51.358027 containerd[1494]: time="2025-09-12T17:34:51.357999149Z" level=info msg="StopPodSandbox for \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\""
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.403 [WARNING][5476] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"3081acf9-a29d-4b4d-97c7-a33056bd9370", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f", Pod:"goldmane-54d579b49d-5bksp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8d349f8a57c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.403 [INFO][5476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49"
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.403 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" iface="eth0" netns=""
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.403 [INFO][5476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49"
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.403 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49"
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.429 [INFO][5483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" HandleID="k8s-pod-network.25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0"
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.429 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.430 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.447 [WARNING][5483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" HandleID="k8s-pod-network.25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0"
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.447 [INFO][5483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" HandleID="k8s-pod-network.25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0"
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.451 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:51.457948 containerd[1494]: 2025-09-12 17:34:51.455 [INFO][5476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49"
Sep 12 17:34:51.458911 containerd[1494]: time="2025-09-12T17:34:51.458288970Z" level=info msg="TearDown network for sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\" successfully"
Sep 12 17:34:51.458911 containerd[1494]: time="2025-09-12T17:34:51.458348208Z" level=info msg="StopPodSandbox for \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\" returns successfully"
Sep 12 17:34:51.460353 containerd[1494]: time="2025-09-12T17:34:51.459815370Z" level=info msg="RemovePodSandbox for \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\""
Sep 12 17:34:51.460353 containerd[1494]: time="2025-09-12T17:34:51.459848466Z" level=info msg="Forcibly stopping sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\""
Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.539 [WARNING][5497] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"3081acf9-a29d-4b4d-97c7-a33056bd9370", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"7f005c632f0f636b01dec7fdc8e27c47bfd7d8c78706beaf05d8bc5bc4cbd71f", Pod:"goldmane-54d579b49d-5bksp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8d349f8a57c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.539 [INFO][5497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49"
Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.539 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" iface="eth0" netns=""
Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.539 [INFO][5497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49"
Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.539 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49"
Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.565 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" HandleID="k8s-pod-network.25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0"
Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.565 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.565 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.570 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" HandleID="k8s-pod-network.25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.570 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" HandleID="k8s-pod-network.25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Workload="ci--4081--3--6--2--340685d2b8-k8s-goldmane--54d579b49d--5bksp-eth0" Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.571 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:51.577278 containerd[1494]: 2025-09-12 17:34:51.574 [INFO][5497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49" Sep 12 17:34:51.577278 containerd[1494]: time="2025-09-12T17:34:51.576735498Z" level=info msg="TearDown network for sandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\" successfully" Sep 12 17:34:51.582145 containerd[1494]: time="2025-09-12T17:34:51.581902949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:51.582145 containerd[1494]: time="2025-09-12T17:34:51.581967807Z" level=info msg="RemovePodSandbox \"25dbac092b2e9421fb00011e2719ced307331ae116f30f7552c5ebe1d41fcd49\" returns successfully" Sep 12 17:34:51.582851 containerd[1494]: time="2025-09-12T17:34:51.582608148Z" level=info msg="StopPodSandbox for \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\"" Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.622 [WARNING][5518] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.622 [INFO][5518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.622 [INFO][5518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" iface="eth0" netns="" Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.622 [INFO][5518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.623 [INFO][5518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.650 [INFO][5525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" HandleID="k8s-pod-network.dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.650 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.650 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.657 [WARNING][5525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" HandleID="k8s-pod-network.dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.657 [INFO][5525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" HandleID="k8s-pod-network.dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.658 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:51.663751 containerd[1494]: 2025-09-12 17:34:51.661 [INFO][5518] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:51.664578 containerd[1494]: time="2025-09-12T17:34:51.664292350Z" level=info msg="TearDown network for sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\" successfully" Sep 12 17:34:51.664578 containerd[1494]: time="2025-09-12T17:34:51.664336077Z" level=info msg="StopPodSandbox for \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\" returns successfully" Sep 12 17:34:51.665089 containerd[1494]: time="2025-09-12T17:34:51.665039324Z" level=info msg="RemovePodSandbox for \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\"" Sep 12 17:34:51.665089 containerd[1494]: time="2025-09-12T17:34:51.665087259Z" level=info msg="Forcibly stopping sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\"" Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.735 [WARNING][5539] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" WorkloadEndpoint="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.735 [INFO][5539] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.735 [INFO][5539] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" iface="eth0" netns="" Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.735 [INFO][5539] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.735 [INFO][5539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.770 [INFO][5550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" HandleID="k8s-pod-network.dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.771 [INFO][5550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.771 [INFO][5550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.778 [WARNING][5550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" HandleID="k8s-pod-network.dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.778 [INFO][5550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" HandleID="k8s-pod-network.dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Workload="ci--4081--3--6--2--340685d2b8-k8s-whisker--5d989cb468--c9j2k-eth0" Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.779 [INFO][5550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:51.785823 containerd[1494]: 2025-09-12 17:34:51.782 [INFO][5539] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8" Sep 12 17:34:51.787628 containerd[1494]: time="2025-09-12T17:34:51.785928921Z" level=info msg="TearDown network for sandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\" successfully" Sep 12 17:34:51.791841 containerd[1494]: time="2025-09-12T17:34:51.791789808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:51.791983 containerd[1494]: time="2025-09-12T17:34:51.791856210Z" level=info msg="RemovePodSandbox \"dc5e874621f859648d8da6dac376b1b4028473d47fd4e43dd84488c158fb09b8\" returns successfully" Sep 12 17:34:51.792879 containerd[1494]: time="2025-09-12T17:34:51.792680065Z" level=info msg="StopPodSandbox for \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\"" Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.835 [WARNING][5565] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe8c226f-3398-4b35-97ba-3a5b1ba242b3", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96", Pod:"coredns-674b8bbfcf-524kk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0865a0b2c84", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.835 [INFO][5565] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.835 [INFO][5565] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" iface="eth0" netns="" Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.835 [INFO][5565] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.835 [INFO][5565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.867 [INFO][5572] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" HandleID="k8s-pod-network.94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.868 [INFO][5572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.868 [INFO][5572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.875 [WARNING][5572] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" HandleID="k8s-pod-network.94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.875 [INFO][5572] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" HandleID="k8s-pod-network.94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.878 [INFO][5572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:51.884255 containerd[1494]: 2025-09-12 17:34:51.881 [INFO][5565] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:51.886159 containerd[1494]: time="2025-09-12T17:34:51.884712634Z" level=info msg="TearDown network for sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\" successfully" Sep 12 17:34:51.886159 containerd[1494]: time="2025-09-12T17:34:51.884739516Z" level=info msg="StopPodSandbox for \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\" returns successfully" Sep 12 17:34:51.886159 containerd[1494]: time="2025-09-12T17:34:51.885205672Z" level=info msg="RemovePodSandbox for \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\"" Sep 12 17:34:51.886159 containerd[1494]: time="2025-09-12T17:34:51.885318696Z" level=info msg="Forcibly stopping sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\"" Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.934 [WARNING][5586] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe8c226f-3398-4b35-97ba-3a5b1ba242b3", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"889e2d760c32e95eb68ee8fed4f26af6274d7a1b7e64cc177995daa03057ff96", Pod:"coredns-674b8bbfcf-524kk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0865a0b2c84", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.935 [INFO][5586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.935 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" iface="eth0" netns="" Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.935 [INFO][5586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.935 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.968 [INFO][5593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" HandleID="k8s-pod-network.94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.968 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.968 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.975 [WARNING][5593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" HandleID="k8s-pod-network.94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.975 [INFO][5593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" HandleID="k8s-pod-network.94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Workload="ci--4081--3--6--2--340685d2b8-k8s-coredns--674b8bbfcf--524kk-eth0" Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.977 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:51.982438 containerd[1494]: 2025-09-12 17:34:51.979 [INFO][5586] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10" Sep 12 17:34:51.982438 containerd[1494]: time="2025-09-12T17:34:51.982360418Z" level=info msg="TearDown network for sandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\" successfully" Sep 12 17:34:51.987283 containerd[1494]: time="2025-09-12T17:34:51.987241339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:51.988220 containerd[1494]: time="2025-09-12T17:34:51.988201415Z" level=info msg="RemovePodSandbox \"94c9e7694afea3a8802728d03eeb6937b6637f890c3d9ecd1a7528c085a8ff10\" returns successfully" Sep 12 17:34:51.988991 containerd[1494]: time="2025-09-12T17:34:51.988697610Z" level=info msg="StopPodSandbox for \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\"" Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.048 [WARNING][5607] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0", GenerateName:"calico-apiserver-5598f4bff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"740d955b-2103-436a-891e-3472e5de0fd4", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5598f4bff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087", Pod:"calico-apiserver-5598f4bff7-rqkr4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13d2313f9d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.049 [INFO][5607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.049 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" iface="eth0" netns="" Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.049 [INFO][5607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.049 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.086 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" HandleID="k8s-pod-network.45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.086 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.086 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.092 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" HandleID="k8s-pod-network.45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.092 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" HandleID="k8s-pod-network.45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.094 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:52.098448 containerd[1494]: 2025-09-12 17:34:52.096 [INFO][5607] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:52.100776 containerd[1494]: time="2025-09-12T17:34:52.098523700Z" level=info msg="TearDown network for sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\" successfully" Sep 12 17:34:52.100776 containerd[1494]: time="2025-09-12T17:34:52.098548560Z" level=info msg="StopPodSandbox for \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\" returns successfully" Sep 12 17:34:52.100776 containerd[1494]: time="2025-09-12T17:34:52.099069674Z" level=info msg="RemovePodSandbox for \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\"" Sep 12 17:34:52.100776 containerd[1494]: time="2025-09-12T17:34:52.099090875Z" level=info msg="Forcibly stopping sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\"" Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.142 [WARNING][5629] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0", GenerateName:"calico-apiserver-5598f4bff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"740d955b-2103-436a-891e-3472e5de0fd4", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5598f4bff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-2-340685d2b8", ContainerID:"9b4ffa63f5d21407402f881346a186dead690a6c90738ab73291307cd7250087", Pod:"calico-apiserver-5598f4bff7-rqkr4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13d2313f9d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.142 [INFO][5629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.142 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" iface="eth0" netns="" Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.142 [INFO][5629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.142 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.166 [INFO][5636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" HandleID="k8s-pod-network.45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.166 [INFO][5636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.166 [INFO][5636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.173 [WARNING][5636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" HandleID="k8s-pod-network.45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.174 [INFO][5636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" HandleID="k8s-pod-network.45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Workload="ci--4081--3--6--2--340685d2b8-k8s-calico--apiserver--5598f4bff7--rqkr4-eth0" Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.176 [INFO][5636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:52.182711 containerd[1494]: 2025-09-12 17:34:52.178 [INFO][5629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5" Sep 12 17:34:52.183908 containerd[1494]: time="2025-09-12T17:34:52.183847213Z" level=info msg="TearDown network for sandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\" successfully" Sep 12 17:34:52.213246 containerd[1494]: time="2025-09-12T17:34:52.211786949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:52.213246 containerd[1494]: time="2025-09-12T17:34:52.211904913Z" level=info msg="RemovePodSandbox \"45d45a3824b7db8abb7a912531ff51c047426a015f1f74c81ce43e3076a465e5\" returns successfully" Sep 12 17:34:59.557802 kubelet[2565]: I0912 17:34:59.557731 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:35:01.295525 kubelet[2565]: I0912 17:35:01.295074 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:35:04.457828 systemd[1]: run-containerd-runc-k8s.io-debb573418ad352cf264e371b67b882cb2a25a64b3be2cc69c32a0c34bc8b581-runc.ecOUE3.mount: Deactivated successfully. Sep 12 17:35:26.250861 systemd[1]: run-containerd-runc-k8s.io-539da58930056c92733ddcf3845b12b4519111182470a553d9d112d48e5315a4-runc.V4RW1X.mount: Deactivated successfully. Sep 12 17:35:27.173598 systemd[1]: Started sshd@8-135.181.98.85:22-147.75.109.163:34642.service - OpenSSH per-connection server daemon (147.75.109.163:34642). Sep 12 17:35:28.190165 sshd[5768]: Accepted publickey for core from 147.75.109.163 port 34642 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:28.191973 sshd[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:28.202113 systemd-logind[1475]: New session 8 of user core. Sep 12 17:35:28.204249 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:35:29.488173 sshd[5768]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:29.496567 systemd[1]: sshd@8-135.181.98.85:22-147.75.109.163:34642.service: Deactivated successfully. Sep 12 17:35:29.498845 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:35:29.499437 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:35:29.501788 systemd-logind[1475]: Removed session 8. 
Sep 12 17:35:34.688539 systemd[1]: Started sshd@9-135.181.98.85:22-147.75.109.163:41690.service - OpenSSH per-connection server daemon (147.75.109.163:41690). Sep 12 17:35:35.793011 sshd[5782]: Accepted publickey for core from 147.75.109.163 port 41690 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:35.794629 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:35.799333 systemd-logind[1475]: New session 9 of user core. Sep 12 17:35:35.806360 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:35:36.796135 sshd[5782]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:36.799720 systemd[1]: sshd@9-135.181.98.85:22-147.75.109.163:41690.service: Deactivated successfully. Sep 12 17:35:36.801546 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:35:36.802917 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:35:36.803808 systemd-logind[1475]: Removed session 9. Sep 12 17:35:36.986532 systemd[1]: Started sshd@10-135.181.98.85:22-147.75.109.163:41692.service - OpenSSH per-connection server daemon (147.75.109.163:41692). Sep 12 17:35:38.079408 sshd[5815]: Accepted publickey for core from 147.75.109.163 port 41692 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:38.081137 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:38.085810 systemd-logind[1475]: New session 10 of user core. Sep 12 17:35:38.088307 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:35:38.944695 sshd[5815]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:38.948251 systemd[1]: sshd@10-135.181.98.85:22-147.75.109.163:41692.service: Deactivated successfully. Sep 12 17:35:38.950211 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:35:38.952029 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:35:38.953303 systemd-logind[1475]: Removed session 10. Sep 12 17:35:39.097348 systemd[1]: Started sshd@11-135.181.98.85:22-147.75.109.163:41702.service - OpenSSH per-connection server daemon (147.75.109.163:41702). Sep 12 17:35:40.091504 sshd[5825]: Accepted publickey for core from 147.75.109.163 port 41702 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:40.093025 sshd[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:40.097851 systemd-logind[1475]: New session 11 of user core. Sep 12 17:35:40.101374 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:35:40.871267 sshd[5825]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:40.876412 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:35:40.876649 systemd[1]: sshd@11-135.181.98.85:22-147.75.109.163:41702.service: Deactivated successfully. Sep 12 17:35:40.878371 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:35:40.879210 systemd-logind[1475]: Removed session 11. Sep 12 17:35:46.042814 systemd[1]: Started sshd@12-135.181.98.85:22-147.75.109.163:41044.service - OpenSSH per-connection server daemon (147.75.109.163:41044). 
Sep 12 17:35:47.056115 sshd[5880]: Accepted publickey for core from 147.75.109.163 port 41044 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:47.058100 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:47.064053 systemd-logind[1475]: New session 12 of user core. Sep 12 17:35:47.066287 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:35:47.852912 sshd[5880]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:47.856492 systemd[1]: sshd@12-135.181.98.85:22-147.75.109.163:41044.service: Deactivated successfully. Sep 12 17:35:47.858044 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:35:47.858958 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:35:47.859947 systemd-logind[1475]: Removed session 12. Sep 12 17:35:53.057610 systemd[1]: Started sshd@13-135.181.98.85:22-147.75.109.163:54978.service - OpenSSH per-connection server daemon (147.75.109.163:54978). Sep 12 17:35:54.155469 sshd[5901]: Accepted publickey for core from 147.75.109.163 port 54978 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:54.157066 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:54.161201 systemd-logind[1475]: New session 13 of user core. Sep 12 17:35:54.168282 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:35:55.037281 sshd[5901]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:55.040997 systemd[1]: sshd@13-135.181.98.85:22-147.75.109.163:54978.service: Deactivated successfully. Sep 12 17:35:55.043087 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:35:55.044698 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:35:55.046020 systemd-logind[1475]: Removed session 13. Sep 12 17:36:00.225221 systemd[1]: Started sshd@14-135.181.98.85:22-147.75.109.163:57074.service - OpenSSH per-connection server daemon (147.75.109.163:57074). Sep 12 17:36:01.310906 sshd[5934]: Accepted publickey for core from 147.75.109.163 port 57074 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:36:01.313906 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:01.319729 systemd-logind[1475]: New session 14 of user core. Sep 12 17:36:01.326377 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:36:02.173717 sshd[5934]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:02.178826 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:36:02.179653 systemd[1]: sshd@14-135.181.98.85:22-147.75.109.163:57074.service: Deactivated successfully. Sep 12 17:36:02.182329 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:36:02.184342 systemd-logind[1475]: Removed session 14. Sep 12 17:36:02.358397 systemd[1]: Started sshd@15-135.181.98.85:22-147.75.109.163:57090.service - OpenSSH per-connection server daemon (147.75.109.163:57090). Sep 12 17:36:03.454242 sshd[5947]: Accepted publickey for core from 147.75.109.163 port 57090 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:36:03.455668 sshd[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:03.459610 systemd-logind[1475]: New session 15 of user core. Sep 12 17:36:03.467330 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 12 17:36:04.400432 sshd[5947]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:04.410829 systemd[1]: sshd@15-135.181.98.85:22-147.75.109.163:57090.service: Deactivated successfully. Sep 12 17:36:04.416627 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:36:04.421729 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:36:04.422893 systemd-logind[1475]: Removed session 15. Sep 12 17:36:04.578690 systemd[1]: Started sshd@16-135.181.98.85:22-147.75.109.163:57092.service - OpenSSH per-connection server daemon (147.75.109.163:57092). Sep 12 17:36:05.675323 sshd[5979]: Accepted publickey for core from 147.75.109.163 port 57092 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:36:05.676727 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:05.681198 systemd-logind[1475]: New session 16 of user core. Sep 12 17:36:05.686368 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:36:07.240777 sshd[5979]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:07.247985 systemd[1]: sshd@16-135.181.98.85:22-147.75.109.163:57092.service: Deactivated successfully. Sep 12 17:36:07.250283 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:36:07.252033 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:36:07.254224 systemd-logind[1475]: Removed session 16. Sep 12 17:36:07.392625 systemd[1]: Started sshd@17-135.181.98.85:22-147.75.109.163:57098.service - OpenSSH per-connection server daemon (147.75.109.163:57098). Sep 12 17:36:08.389853 sshd[6026]: Accepted publickey for core from 147.75.109.163 port 57098 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:36:08.391695 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:08.395827 systemd-logind[1475]: New session 17 of user core. Sep 12 17:36:08.400251 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:36:09.726370 sshd[6026]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:09.730037 systemd[1]: sshd@17-135.181.98.85:22-147.75.109.163:57098.service: Deactivated successfully. Sep 12 17:36:09.732187 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:36:09.733176 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:36:09.733994 systemd-logind[1475]: Removed session 17. Sep 12 17:36:09.926463 systemd[1]: Started sshd@18-135.181.98.85:22-147.75.109.163:57108.service - OpenSSH per-connection server daemon (147.75.109.163:57108). Sep 12 17:36:11.028898 sshd[6037]: Accepted publickey for core from 147.75.109.163 port 57108 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:36:11.030245 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:11.034302 systemd-logind[1475]: New session 18 of user core. Sep 12 17:36:11.038276 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:36:11.863589 sshd[6037]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:11.873668 systemd[1]: sshd@18-135.181.98.85:22-147.75.109.163:57108.service: Deactivated successfully. Sep 12 17:36:11.874263 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:36:11.879683 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:36:11.885207 systemd-logind[1475]: Removed session 18. 
Sep 12 17:36:14.447057 systemd[1]: run-containerd-runc-k8s.io-539da58930056c92733ddcf3845b12b4519111182470a553d9d112d48e5315a4-runc.fhzKeI.mount: Deactivated successfully. Sep 12 17:36:17.050944 systemd[1]: Started sshd@19-135.181.98.85:22-147.75.109.163:32832.service - OpenSSH per-connection server daemon (147.75.109.163:32832). Sep 12 17:36:18.173671 sshd[6090]: Accepted publickey for core from 147.75.109.163 port 32832 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:36:18.174763 sshd[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:18.180524 systemd-logind[1475]: New session 19 of user core. Sep 12 17:36:18.184967 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:36:19.262658 sshd[6090]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:19.265768 systemd[1]: sshd@19-135.181.98.85:22-147.75.109.163:32832.service: Deactivated successfully. Sep 12 17:36:19.267657 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:36:19.268281 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:36:19.269110 systemd-logind[1475]: Removed session 19. Sep 12 17:36:34.065622 systemd[1]: cri-containerd-11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d.scope: Deactivated successfully. Sep 12 17:36:34.066209 systemd[1]: cri-containerd-11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d.scope: Consumed 15.239s CPU time. Sep 12 17:36:34.326308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d-rootfs.mount: Deactivated successfully. Sep 12 17:36:34.399352 containerd[1494]: time="2025-09-12T17:36:34.363017222Z" level=info msg="shim disconnected" id=11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d namespace=k8s.io Sep 12 17:36:34.399352 containerd[1494]: time="2025-09-12T17:36:34.399348115Z" level=warning msg="cleaning up after shim disconnected" id=11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d namespace=k8s.io Sep 12 17:36:34.399352 containerd[1494]: time="2025-09-12T17:36:34.399365198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:36:34.530317 kubelet[2565]: E0912 17:36:34.530240 2565 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56336->10.0.0.2:2379: read: connection timed out" Sep 12 17:36:35.023145 kubelet[2565]: I0912 17:36:35.023085 2565 scope.go:117] "RemoveContainer" containerID="11101ec797b8dbfb9450d9d15b563eccb47c9e41046b46b94a627d56d2b4789d" Sep 12 17:36:35.114222 containerd[1494]: time="2025-09-12T17:36:35.114158702Z" level=info msg="CreateContainer within sandbox \"e9c142fdd1ed5b4bcefbbc06a0dd4ea832a1d86837b5c01a65dc822115df1657\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 17:36:35.258229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521676564.mount: Deactivated successfully. 
Sep 12 17:36:35.283886 containerd[1494]: time="2025-09-12T17:36:35.283388474Z" level=info msg="CreateContainer within sandbox \"e9c142fdd1ed5b4bcefbbc06a0dd4ea832a1d86837b5c01a65dc822115df1657\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2d9e7fc369aa07aef6edabc5e5b45d1493c74561434ac5f43b385341e5e2afd6\"" Sep 12 17:36:35.287889 containerd[1494]: time="2025-09-12T17:36:35.287843117Z" level=info msg="StartContainer for \"2d9e7fc369aa07aef6edabc5e5b45d1493c74561434ac5f43b385341e5e2afd6\"" Sep 12 17:36:35.339426 systemd[1]: Started cri-containerd-2d9e7fc369aa07aef6edabc5e5b45d1493c74561434ac5f43b385341e5e2afd6.scope - libcontainer container 2d9e7fc369aa07aef6edabc5e5b45d1493c74561434ac5f43b385341e5e2afd6. Sep 12 17:36:35.377167 containerd[1494]: time="2025-09-12T17:36:35.376770186Z" level=info msg="StartContainer for \"2d9e7fc369aa07aef6edabc5e5b45d1493c74561434ac5f43b385341e5e2afd6\" returns successfully" Sep 12 17:36:35.466001 kubelet[2565]: E0912 17:36:35.447332 2565 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:59470->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-2-340685d2b8.1864998d38259bb8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-2-340685d2b8,UID:d05c13e438a231f9a5807f8e4f1ba738,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-2-340685d2b8,},FirstTimestamp:2025-09-12 17:36:26.39276332 +0000 UTC m=+156.644282146,LastTimestamp:2025-09-12 17:36:26.39276332 +0000 UTC m=+156.644282146,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-2-340685d2b8,}" Sep 12 17:36:35.671364 systemd[1]: cri-containerd-5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588.scope: Deactivated successfully. Sep 12 17:36:35.672268 systemd[1]: cri-containerd-5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588.scope: Consumed 4.576s CPU time, 27.5M memory peak, 0B memory swap peak. Sep 12 17:36:35.731446 containerd[1494]: time="2025-09-12T17:36:35.731094622Z" level=info msg="shim disconnected" id=5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588 namespace=k8s.io Sep 12 17:36:35.731446 containerd[1494]: time="2025-09-12T17:36:35.731176692Z" level=warning msg="cleaning up after shim disconnected" id=5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588 namespace=k8s.io Sep 12 17:36:35.731446 containerd[1494]: time="2025-09-12T17:36:35.731187803Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:36:35.734102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588-rootfs.mount: Deactivated successfully. 
Sep 12 17:36:36.023516 kubelet[2565]: I0912 17:36:36.023470 2565 scope.go:117] "RemoveContainer" containerID="5ae4292d97a93b40ab698c6090e9518a96f6fb2c6a108bda782511baeb151588" Sep 12 17:36:36.026353 containerd[1494]: time="2025-09-12T17:36:36.025749512Z" level=info msg="CreateContainer within sandbox \"2f58b6308691c82952daa4663e4d7f5e3d25d159f9d1baa43f4247a8876cfdde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 12 17:36:36.042913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242063158.mount: Deactivated successfully. Sep 12 17:36:36.045706 containerd[1494]: time="2025-09-12T17:36:36.045652672Z" level=info msg="CreateContainer within sandbox \"2f58b6308691c82952daa4663e4d7f5e3d25d159f9d1baa43f4247a8876cfdde\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0072892a44d55c84cbc4c41d40495efbfff9f5e9f3fcdcd9cf612ffa2ce36910\"" Sep 12 17:36:36.046456 containerd[1494]: time="2025-09-12T17:36:36.046427586Z" level=info msg="StartContainer for \"0072892a44d55c84cbc4c41d40495efbfff9f5e9f3fcdcd9cf612ffa2ce36910\"" Sep 12 17:36:36.074285 systemd[1]: Started cri-containerd-0072892a44d55c84cbc4c41d40495efbfff9f5e9f3fcdcd9cf612ffa2ce36910.scope - libcontainer container 0072892a44d55c84cbc4c41d40495efbfff9f5e9f3fcdcd9cf612ffa2ce36910. Sep 12 17:36:36.114920 containerd[1494]: time="2025-09-12T17:36:36.114822880Z" level=info msg="StartContainer for \"0072892a44d55c84cbc4c41d40495efbfff9f5e9f3fcdcd9cf612ffa2ce36910\" returns successfully" Sep 12 17:36:36.334909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560955660.mount: Deactivated successfully. Sep 12 17:36:40.496921 systemd[1]: cri-containerd-035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1.scope: Deactivated successfully. Sep 12 17:36:40.497181 systemd[1]: cri-containerd-035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1.scope: Consumed 1.883s CPU time, 21.8M memory peak, 0B memory swap peak. Sep 12 17:36:40.521871 containerd[1494]: time="2025-09-12T17:36:40.521808958Z" level=info msg="shim disconnected" id=035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1 namespace=k8s.io Sep 12 17:36:40.521871 containerd[1494]: time="2025-09-12T17:36:40.521860499Z" level=warning msg="cleaning up after shim disconnected" id=035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1 namespace=k8s.io Sep 12 17:36:40.521871 containerd[1494]: time="2025-09-12T17:36:40.521867833Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:36:40.523515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1-rootfs.mount: Deactivated successfully. 
Sep 12 17:36:41.048756 kubelet[2565]: I0912 17:36:41.048701 2565 scope.go:117] "RemoveContainer" containerID="035baac00b79027e72295af3ddff2b7c5070fbae181921c4d976e6ad405561f1" Sep 12 17:36:41.051480 containerd[1494]: time="2025-09-12T17:36:41.051439120Z" level=info msg="CreateContainer within sandbox \"8e4e84ccd2dbfadc4c9a8077039e01998cce0daf4719e15e6cab672d9ce1cea2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 12 17:36:41.063300 containerd[1494]: time="2025-09-12T17:36:41.062089536Z" level=info msg="CreateContainer within sandbox \"8e4e84ccd2dbfadc4c9a8077039e01998cce0daf4719e15e6cab672d9ce1cea2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1d25cbd810bccd8c00a153ab7df511e102d8fbd947ccfc3ea492d1d4cc75de44\"" Sep 12 17:36:41.063753 containerd[1494]: time="2025-09-12T17:36:41.063722967Z" level=info msg="StartContainer for \"1d25cbd810bccd8c00a153ab7df511e102d8fbd947ccfc3ea492d1d4cc75de44\"" Sep 12 17:36:41.103380 systemd[1]: Started cri-containerd-1d25cbd810bccd8c00a153ab7df511e102d8fbd947ccfc3ea492d1d4cc75de44.scope - libcontainer container 1d25cbd810bccd8c00a153ab7df511e102d8fbd947ccfc3ea492d1d4cc75de44. Sep 12 17:36:41.149416 containerd[1494]: time="2025-09-12T17:36:41.149364323Z" level=info msg="StartContainer for \"1d25cbd810bccd8c00a153ab7df511e102d8fbd947ccfc3ea492d1d4cc75de44\" returns successfully"