Sep 12 17:32:53.864268 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025 Sep 12 17:32:53.864290 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:32:53.864298 kernel: BIOS-provided physical RAM map: Sep 12 17:32:53.864303 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 12 17:32:53.864308 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 12 17:32:53.864313 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 12 17:32:53.864318 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Sep 12 17:32:53.864323 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Sep 12 17:32:53.864329 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 12 17:32:53.864334 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 12 17:32:53.864339 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 12 17:32:53.864344 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 12 17:32:53.864349 kernel: NX (Execute Disable) protection: active Sep 12 17:32:53.864354 kernel: APIC: Static calls initialized Sep 12 17:32:53.864361 kernel: SMBIOS 2.8 present. Sep 12 17:32:53.864367 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Sep 12 17:32:53.864372 kernel: Hypervisor detected: KVM Sep 12 17:32:53.864377 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 17:32:53.864382 kernel: kvm-clock: using sched offset of 3110128651 cycles Sep 12 17:32:53.864388 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 17:32:53.864393 kernel: tsc: Detected 2445.404 MHz processor Sep 12 17:32:53.864399 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 17:32:53.864405 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 17:32:53.864412 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Sep 12 17:32:53.864417 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 12 17:32:53.864422 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 17:32:53.864428 kernel: Using GB pages for direct mapping Sep 12 17:32:53.864433 kernel: ACPI: Early table checksum verification disabled Sep 12 17:32:53.864438 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Sep 12 17:32:53.864444 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:32:53.864449 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:32:53.864455 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:32:53.864461 kernel: ACPI: FACS 0x000000007CFE0000 000040 Sep 12 17:32:53.864467 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:32:53.864472 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) 
Sep 12 17:32:53.864478 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:32:53.864483 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:32:53.864488 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Sep 12 17:32:53.864494 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Sep 12 17:32:53.864499 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Sep 12 17:32:53.864508 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Sep 12 17:32:53.864514 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Sep 12 17:32:53.864520 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Sep 12 17:32:53.864525 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Sep 12 17:32:53.864531 kernel: No NUMA configuration found Sep 12 17:32:53.864537 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Sep 12 17:32:53.864544 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Sep 12 17:32:53.864549 kernel: Zone ranges: Sep 12 17:32:53.864555 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 17:32:53.864561 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Sep 12 17:32:53.864567 kernel: Normal empty Sep 12 17:32:53.864572 kernel: Movable zone start for each node Sep 12 17:32:53.864578 kernel: Early memory node ranges Sep 12 17:32:53.864583 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 12 17:32:53.864589 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Sep 12 17:32:53.864595 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Sep 12 17:32:53.864602 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:32:53.864607 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 12 17:32:53.864613 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 12 17:32:53.864619 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 12 17:32:53.864625 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 17:32:53.864630 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 17:32:53.864636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 12 17:32:53.864642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 17:32:53.864648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 17:32:53.864655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 17:32:53.864661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 17:32:53.864666 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 17:32:53.864672 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 12 17:32:53.864678 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 12 17:32:53.864684 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 12 17:32:53.864689 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 12 17:32:53.864695 kernel: Booting paravirtualized kernel on KVM Sep 12 17:32:53.864701 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 17:32:53.864723 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 12 17:32:53.864729 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576 Sep 12 17:32:53.864735 kernel: pcpu-alloc: 
s197160 r8192 d32216 u1048576 alloc=1*2097152 Sep 12 17:32:53.864741 kernel: pcpu-alloc: [0] 0 1 Sep 12 17:32:53.864746 kernel: kvm-guest: PV spinlocks disabled, no host support Sep 12 17:32:53.864753 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:32:53.864760 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:32:53.864765 kernel: random: crng init done Sep 12 17:32:53.864773 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:32:53.864778 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 17:32:53.864784 kernel: Fallback order for Node 0: 0 Sep 12 17:32:53.864790 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708 Sep 12 17:32:53.864796 kernel: Policy zone: DMA32 Sep 12 17:32:53.864801 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:32:53.864807 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved) Sep 12 17:32:53.864813 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:32:53.864819 kernel: ftrace: allocating 37974 entries in 149 pages Sep 12 17:32:53.864826 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 17:32:53.864832 kernel: Dynamic Preempt: voluntary Sep 12 17:32:53.864838 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:32:53.864844 kernel: rcu: RCU event tracing is enabled. Sep 12 17:32:53.864850 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:32:53.864856 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:32:53.864862 kernel: Rude variant of Tasks RCU enabled. Sep 12 17:32:53.864868 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:32:53.864874 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:32:53.864880 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:32:53.864887 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 12 17:32:53.864893 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:32:53.864899 kernel: Console: colour VGA+ 80x25 Sep 12 17:32:53.864904 kernel: printk: console [tty0] enabled Sep 12 17:32:53.864910 kernel: printk: console [ttyS0] enabled Sep 12 17:32:53.864916 kernel: ACPI: Core revision 20230628 Sep 12 17:32:53.864922 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 12 17:32:53.864928 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 17:32:53.864934 kernel: x2apic enabled Sep 12 17:32:53.864941 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 17:32:53.864947 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 12 17:32:53.864953 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 12 17:32:53.864959 kernel: Calibrating delay loop (skipped) preset value.. 
4890.80 BogoMIPS (lpj=2445404) Sep 12 17:32:53.864964 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 12 17:32:53.864970 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 12 17:32:53.864975 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 12 17:32:53.864980 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 17:32:53.864992 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 17:32:53.864998 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 17:32:53.865004 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 12 17:32:53.865011 kernel: active return thunk: retbleed_return_thunk Sep 12 17:32:53.865017 kernel: RETBleed: Mitigation: untrained return thunk Sep 12 17:32:53.865022 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 17:32:53.865028 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 17:32:53.865034 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 17:32:53.865040 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 17:32:53.865047 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 17:32:53.865052 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 17:32:53.865058 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 12 17:32:53.865064 kernel: Freeing SMP alternatives memory: 32K Sep 12 17:32:53.865069 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:32:53.865075 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:32:53.865081 kernel: landlock: Up and running. Sep 12 17:32:53.865087 kernel: SELinux: Initializing. Sep 12 17:32:53.865094 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 17:32:53.865099 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 17:32:53.865105 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 12 17:32:53.865111 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:32:53.865117 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:32:53.865123 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:32:53.865128 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 12 17:32:53.865134 kernel: ... version: 0 Sep 12 17:32:53.865139 kernel: ... bit width: 48 Sep 12 17:32:53.867533 kernel: ... generic registers: 6 Sep 12 17:32:53.867548 kernel: ... value mask: 0000ffffffffffff Sep 12 17:32:53.867560 kernel: ... max period: 00007fffffffffff Sep 12 17:32:53.867572 kernel: ... fixed-purpose events: 0 Sep 12 17:32:53.867582 kernel: ... event mask: 000000000000003f Sep 12 17:32:53.867593 kernel: signal: max sigframe size: 1776 Sep 12 17:32:53.867603 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:32:53.867614 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:32:53.867625 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:32:53.867642 kernel: smpboot: x86: Booting SMP configuration: Sep 12 17:32:53.867653 kernel: .... 
node #0, CPUs: #1 Sep 12 17:32:53.867664 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:32:53.867675 kernel: smpboot: Max logical packages: 1 Sep 12 17:32:53.867686 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS) Sep 12 17:32:53.867697 kernel: devtmpfs: initialized Sep 12 17:32:53.867748 kernel: x86/mm: Memory block size: 128MB Sep 12 17:32:53.867762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:32:53.867773 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:32:53.867790 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:32:53.867800 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:32:53.867810 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:32:53.867820 kernel: audit: type=2000 audit(1757698372.634:1): state=initialized audit_enabled=0 res=1 Sep 12 17:32:53.867830 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:32:53.867840 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 17:32:53.867851 kernel: cpuidle: using governor menu Sep 12 17:32:53.867862 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:32:53.867887 kernel: dca service started, version 1.12.1 Sep 12 17:32:53.867902 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 12 17:32:53.867913 kernel: PCI: Using configuration type 1 for base access Sep 12 17:32:53.867919 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 12 17:32:53.867925 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:32:53.867931 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:32:53.867937 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:32:53.867943 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:32:53.867948 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:32:53.867954 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:32:53.867962 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:32:53.867967 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:32:53.867977 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 17:32:53.867988 kernel: ACPI: Interpreter enabled Sep 12 17:32:53.868000 kernel: ACPI: PM: (supports S0 S5) Sep 12 17:32:53.868010 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 17:32:53.868021 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 17:32:53.868031 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 17:32:53.868041 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 12 17:32:53.868054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:32:53.868295 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:32:53.868420 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 12 17:32:53.868529 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 12 17:32:53.868546 kernel: PCI host bridge to bus 0000:00 Sep 12 17:32:53.868660 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 17:32:53.868781 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 17:32:53.868894 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Sep 12 17:32:53.868998 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Sep 12 17:32:53.869098 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 12 17:32:53.869234 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 12 17:32:53.869301 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:32:53.869385 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 12 17:32:53.869467 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Sep 12 17:32:53.869533 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Sep 12 17:32:53.869595 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Sep 12 17:32:53.869658 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Sep 12 17:32:53.869737 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Sep 12 17:32:53.869804 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 17:32:53.869876 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 12 17:32:53.869946 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Sep 12 17:32:53.870016 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Sep 12 17:32:53.870079 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Sep 12 17:32:53.870167 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 12 17:32:53.871268 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Sep 12 17:32:53.871347 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 12 17:32:53.871422 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Sep 12 17:32:53.871496 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 12 17:32:53.871562 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Sep 12 17:32:53.871631 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 12 17:32:53.871694 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Sep 12 17:32:53.871787 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 12 17:32:53.871858 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Sep 12 17:32:53.871932 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 12 17:32:53.871996 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Sep 12 17:32:53.872066 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Sep 12 17:32:53.872129 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Sep 12 17:32:53.873941 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 12 17:32:53.874020 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 12 17:32:53.874091 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 12 17:32:53.874190 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Sep 12 17:32:53.874258 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Sep 12 17:32:53.874327 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 12 17:32:53.874390 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 12 17:32:53.874462 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Sep 12 17:32:53.874533 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Sep 12 17:32:53.874598 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Sep 12 17:32:53.874661 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff 
pref] Sep 12 17:32:53.874738 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 12 17:32:53.874803 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Sep 12 17:32:53.874865 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Sep 12 17:32:53.874938 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 12 17:32:53.875010 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Sep 12 17:32:53.875073 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 12 17:32:53.875136 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Sep 12 17:32:53.875226 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 12 17:32:53.875299 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Sep 12 17:32:53.875363 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Sep 12 17:32:53.875432 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Sep 12 17:32:53.875493 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 12 17:32:53.875553 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Sep 12 17:32:53.875615 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 12 17:32:53.875690 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Sep 12 17:32:53.875770 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Sep 12 17:32:53.875834 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 12 17:32:53.875901 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Sep 12 17:32:53.875963 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 12 17:32:53.876033 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 12 17:32:53.876097 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Sep 12 17:32:53.876433 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 12 17:32:53.876507 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Sep 12 17:32:53.876570 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 12 17:32:53.876645 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Sep 12 17:32:53.876734 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Sep 12 17:32:53.876803 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Sep 12 17:32:53.876867 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 12 17:32:53.876928 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Sep 12 17:32:53.876990 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 12 17:32:53.876999 kernel: acpiphp: Slot [0] registered Sep 12 17:32:53.877074 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Sep 12 17:32:53.877837 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Sep 12 17:32:53.877931 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Sep 12 17:32:53.878000 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Sep 12 17:32:53.878066 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 12 17:32:53.878130 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Sep 12 17:32:53.878231 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 12 17:32:53.878241 kernel: acpiphp: Slot [0-2] registered Sep 12 17:32:53.878321 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 12 17:32:53.878440 kernel: pci 0000:00:02.7: bridge 
window [mem 0xfda00000-0xfdbfffff] Sep 12 17:32:53.878514 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 12 17:32:53.878522 kernel: acpiphp: Slot [0-3] registered Sep 12 17:32:53.878586 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 12 17:32:53.878648 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 12 17:32:53.878724 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 12 17:32:53.878734 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 17:32:53.878740 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 17:32:53.878749 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 17:32:53.878755 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 17:32:53.878761 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 12 17:32:53.878767 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 12 17:32:53.878772 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 12 17:32:53.878778 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 12 17:32:53.878784 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 12 17:32:53.878790 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 12 17:32:53.878795 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 12 17:32:53.878803 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 12 17:32:53.878808 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 12 17:32:53.878814 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 12 17:32:53.878820 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 12 17:32:53.878826 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 12 17:32:53.878832 kernel: iommu: Default domain type: Translated Sep 12 17:32:53.878837 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:32:53.878843 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:32:53.878849 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 17:32:53.878854 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 12 17:32:53.878862 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Sep 12 17:32:53.878929 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 12 17:32:53.878991 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 12 17:32:53.879051 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 17:32:53.879060 kernel: vgaarb: loaded Sep 12 17:32:53.879066 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 12 17:32:53.879072 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 12 17:32:53.879077 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 17:32:53.879087 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:32:53.879093 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:32:53.879099 kernel: pnp: PnP ACPI init Sep 12 17:32:53.880874 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 12 17:32:53.880888 kernel: pnp: PnP ACPI: found 5 devices Sep 12 17:32:53.880895 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:32:53.880901 kernel: NET: Registered PF_INET protocol family Sep 12 17:32:53.880907 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:32:53.880917 kernel: tcp_listen_portaddr_hash hash table entries: 1024 
(order: 2, 16384 bytes, linear) Sep 12 17:32:53.880924 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:32:53.880930 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 17:32:53.880935 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 12 17:32:53.880941 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 12 17:32:53.880947 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 17:32:53.880953 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 17:32:53.880959 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:32:53.880965 kernel: NET: Registered PF_XDP protocol family Sep 12 17:32:53.881036 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 12 17:32:53.881102 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 12 17:32:53.881254 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 12 17:32:53.881320 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Sep 12 17:32:53.881382 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Sep 12 17:32:53.881443 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Sep 12 17:32:53.881503 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 12 17:32:53.881569 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Sep 12 17:32:53.881631 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Sep 12 17:32:53.881691 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 12 17:32:53.881783 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Sep 12 17:32:53.881868 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 12 17:32:53.881934 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 12 17:32:53.881996 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Sep 12 17:32:53.882057 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 12 17:32:53.882123 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 12 17:32:53.882225 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Sep 12 17:32:53.882290 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 12 17:32:53.882351 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 12 17:32:53.882412 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Sep 12 17:32:53.882473 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 12 17:32:53.882534 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 12 17:32:53.882601 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Sep 12 17:32:53.882675 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 12 17:32:53.882778 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 12 17:32:53.882876 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Sep 12 17:32:53.882966 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Sep 12 17:32:53.883057 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 12 17:32:53.883948 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 12 17:32:53.884074 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Sep 12 17:32:53.884201 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Sep 12 17:32:53.884307 
kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 12 17:32:53.884379 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 12 17:32:53.884448 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Sep 12 17:32:53.884510 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 12 17:32:53.884571 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 12 17:32:53.884639 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 17:32:53.884695 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 17:32:53.884773 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 17:32:53.884829 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Sep 12 17:32:53.884883 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 12 17:32:53.884937 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 12 17:32:53.885007 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Sep 12 17:32:53.885066 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Sep 12 17:32:53.885131 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Sep 12 17:32:53.885322 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Sep 12 17:32:53.885394 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Sep 12 17:32:53.885452 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 12 17:32:53.885520 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Sep 12 17:32:53.885578 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 12 17:32:53.885641 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Sep 12 17:32:53.885698 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 12 17:32:53.885779 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Sep 12 17:32:53.885837 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 12 17:32:53.885903 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Sep 12 17:32:53.885965 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 12 17:32:53.886020 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 12 17:32:53.886086 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Sep 12 17:32:53.886156 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Sep 12 17:32:53.886220 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 12 17:32:53.886284 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Sep 12 17:32:53.886347 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Sep 12 17:32:53.886403 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 12 17:32:53.886413 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 12 17:32:53.886420 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:32:53.886426 kernel: Initialise system trusted keyrings Sep 12 17:32:53.886433 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 12 17:32:53.886439 kernel: Key type asymmetric registered Sep 12 17:32:53.886445 kernel: Asymmetric key parser 'x509' registered Sep 12 17:32:53.886451 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:32:53.886460 kernel: io scheduler mq-deadline registered Sep 12 17:32:53.886466 kernel: io scheduler kyber registered Sep 12 17:32:53.886472 kernel: io 
scheduler bfq registered Sep 12 17:32:53.886536 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 12 17:32:53.886599 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 12 17:32:53.886661 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 12 17:32:53.886736 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 12 17:32:53.886802 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 12 17:32:53.886870 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 12 17:32:53.886932 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 12 17:32:53.886995 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 12 17:32:53.887057 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 12 17:32:53.887118 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 12 17:32:53.887214 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 12 17:32:53.887280 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 12 17:32:53.887790 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 12 17:32:53.887861 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 12 17:32:53.887931 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 12 17:32:53.887994 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 12 17:32:53.888004 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 12 17:32:53.888066 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Sep 12 17:32:53.888129 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Sep 12 17:32:53.888139 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 17:32:53.888291 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Sep 12 17:32:53.888299 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:32:53.888309 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:32:53.888317 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 17:32:53.888323 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 17:32:53.888329 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 17:32:53.888335 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:32:53.888414 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 12 17:32:53.888475 kernel: rtc_cmos 00:03: registered as rtc0 Sep 12 17:32:53.888531 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T17:32:53 UTC (1757698373) Sep 12 17:32:53.888591 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 12 17:32:53.888600 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 12 17:32:53.888612 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:32:53.888624 kernel: Segment Routing with IPv6 Sep 12 17:32:53.888636 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:32:53.888647 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:32:53.888658 kernel: Key type dns_resolver registered Sep 12 17:32:53.888668 kernel: IPI shorthand broadcast: enabled Sep 12 17:32:53.888680 kernel: sched_clock: Marking stable (1105007898, 132333952)->(1247029035, -9687185) Sep 12 17:32:53.888695 kernel: registered taskstats version 1 Sep 12 17:32:53.888703 kernel: Loading compiled-in X.509 certificates Sep 12 17:32:53.888726 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:32:53.888738 kernel: Key type .fscrypt registered Sep 12 17:32:53.888748 kernel: Key type 
fscrypt-provisioning registered Sep 12 17:32:53.888760 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:32:53.888770 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:32:53.888776 kernel: ima: No architecture policies found Sep 12 17:32:53.888785 kernel: clk: Disabling unused clocks Sep 12 17:32:53.888791 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:32:53.888797 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:32:53.888803 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:32:53.888810 kernel: Run /init as init process Sep 12 17:32:53.888815 kernel: with arguments: Sep 12 17:32:53.888822 kernel: /init Sep 12 17:32:53.888828 kernel: with environment: Sep 12 17:32:53.888834 kernel: HOME=/ Sep 12 17:32:53.888839 kernel: TERM=linux Sep 12 17:32:53.888847 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:32:53.888856 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:32:53.888864 systemd[1]: Detected virtualization kvm. Sep 12 17:32:53.888871 systemd[1]: Detected architecture x86-64. Sep 12 17:32:53.888877 systemd[1]: Running in initrd. Sep 12 17:32:53.888883 systemd[1]: No hostname configured, using default hostname. Sep 12 17:32:53.888890 systemd[1]: Hostname set to . Sep 12 17:32:53.888898 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:32:53.888905 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:32:53.888912 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:32:53.888919 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:32:53.888926 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:32:53.888933 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:32:53.888939 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:32:53.888947 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:32:53.888956 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:32:53.888963 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:32:53.888969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:32:53.888976 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:32:53.888982 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:32:53.888989 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:32:53.888995 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:32:53.889003 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:32:53.889010 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:32:53.889016 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Sep 12 17:32:53.889022 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:32:53.889029 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:32:53.889035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:32:53.889042 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:32:53.889049 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:32:53.889055 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:32:53.889063 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:32:53.889069 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:32:53.889076 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:32:53.889082 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:32:53.889089 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:32:53.889096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:32:53.889123 systemd-journald[187]: Collecting audit messages is disabled. Sep 12 17:32:53.889156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:32:53.889165 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:32:53.889172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:32:53.889178 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:32:53.889189 systemd-journald[187]: Journal started Sep 12 17:32:53.889206 systemd-journald[187]: Runtime Journal (/run/log/journal/bda367f3ae5241e388b20e6ec86583b6) is 4.8M, max 38.4M, 33.6M free. Sep 12 17:32:53.861529 systemd-modules-load[188]: Inserted module 'overlay' Sep 12 17:32:53.937953 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:32:53.937982 kernel: Bridge firewalling registered Sep 12 17:32:53.937994 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:32:53.895975 systemd-modules-load[188]: Inserted module 'br_netfilter' Sep 12 17:32:53.938583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:32:53.939447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:32:53.945282 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:32:53.946667 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:32:53.952271 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:32:53.954293 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:32:53.960247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:32:53.967310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:32:53.968063 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:32:53.971273 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:32:53.975038 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 12 17:32:53.976588 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:32:53.979136 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:32:53.984978 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:32:53.989248 dracut-cmdline[217]: dracut-dracut-053 Sep 12 17:32:53.992198 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:32:54.011548 systemd-resolved[220]: Positive Trust Anchors: Sep 12 17:32:54.011559 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:32:54.011584 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:32:54.019916 systemd-resolved[220]: Defaulting to hostname 'linux'. Sep 12 17:32:54.020734 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:32:54.021450 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:32:54.041174 kernel: SCSI subsystem initialized Sep 12 17:32:54.048171 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:32:54.058203 kernel: iscsi: registered transport (tcp) Sep 12 17:32:54.074321 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:32:54.074392 kernel: QLogic iSCSI HBA Driver Sep 12 17:32:54.110047 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:32:54.115290 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:32:54.146211 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:32:54.146284 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:32:54.146299 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:32:54.184178 kernel: raid6: avx2x4 gen() 31406 MB/s Sep 12 17:32:54.201174 kernel: raid6: avx2x2 gen() 28500 MB/s Sep 12 17:32:54.218300 kernel: raid6: avx2x1 gen() 24455 MB/s Sep 12 17:32:54.218357 kernel: raid6: using algorithm avx2x4 gen() 31406 MB/s Sep 12 17:32:54.236389 kernel: raid6: .... xor() 3928 MB/s, rmw enabled Sep 12 17:32:54.236433 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:32:54.257187 kernel: xor: automatically using best checksumming function avx Sep 12 17:32:54.372184 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:32:54.382648 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:32:54.388290 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 12 17:32:54.417284 systemd-udevd[406]: Using default interface naming scheme 'v255'. Sep 12 17:32:54.420934 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:32:54.430414 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:32:54.446282 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Sep 12 17:32:54.481567 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:32:54.487303 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:32:54.526681 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:32:54.532284 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:32:54.555232 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:32:54.557058 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:32:54.557696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:32:54.558137 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:32:54.565285 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:32:54.578674 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:32:54.593579 kernel: scsi host0: Virtio SCSI HBA Sep 12 17:32:54.605347 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:32:54.609184 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 12 17:32:54.625201 kernel: libata version 3.00 loaded. Sep 12 17:32:54.629259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:32:54.629334 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:32:54.632037 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:32:54.633267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:32:54.633330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:32:54.634832 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:32:54.642305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:32:54.648212 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 17:32:54.648365 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 17:32:54.652304 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 12 17:32:54.652503 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 17:32:54.661182 kernel: scsi host1: ahci Sep 12 17:32:54.684211 kernel: scsi host2: ahci Sep 12 17:32:54.688204 kernel: scsi host3: ahci Sep 12 17:32:54.689158 kernel: sd 0:0:0:0: Power-on or device reset occurred Sep 12 17:32:54.689297 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 12 17:32:54.689389 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 17:32:54.689478 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Sep 12 17:32:54.689562 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 17:32:54.694299 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:32:54.694330 kernel: GPT:17805311 != 80003071 Sep 12 17:32:54.694339 kernel: GPT:Alternate GPT header not at the end of the disk. 
Sep 12 17:32:54.694347 kernel: GPT:17805311 != 80003071 Sep 12 17:32:54.694354 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:32:54.694361 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:32:54.694368 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 17:32:54.698162 kernel: scsi host4: ahci Sep 12 17:32:54.702548 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:32:54.702582 kernel: AES CTR mode by8 optimization enabled Sep 12 17:32:54.708175 kernel: scsi host5: ahci Sep 12 17:32:54.710162 kernel: scsi host6: ahci Sep 12 17:32:54.710281 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46 Sep 12 17:32:54.710292 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46 Sep 12 17:32:54.710299 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46 Sep 12 17:32:54.710306 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46 Sep 12 17:32:54.710314 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46 Sep 12 17:32:54.710326 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46 Sep 12 17:32:54.720199 kernel: ACPI: bus type USB registered Sep 12 17:32:54.721165 kernel: usbcore: registered new interface driver usbfs Sep 12 17:32:54.721186 kernel: usbcore: registered new interface driver hub Sep 12 17:32:54.721196 kernel: usbcore: registered new device driver usb Sep 12 17:32:54.761186 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (467) Sep 12 17:32:54.761051 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 12 17:32:54.764288 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (463) Sep 12 17:32:54.764544 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:32:54.772265 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:32:54.776785 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 12 17:32:54.782414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 12 17:32:54.786787 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 12 17:32:54.787371 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 12 17:32:54.802267 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:32:54.803820 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:32:54.817165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:32:54.817326 disk-uuid[563]: Primary Header is updated. Sep 12 17:32:54.817326 disk-uuid[563]: Secondary Entries is updated. Sep 12 17:32:54.817326 disk-uuid[563]: Secondary Header is updated. 
Sep 12 17:32:55.026694 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 17:32:55.026788 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 17:32:55.026801 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 12 17:32:55.026810 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 17:32:55.031163 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 17:32:55.031247 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 17:32:55.031259 kernel: ata1.00: applying bridge limits Sep 12 17:32:55.033597 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 17:32:55.034170 kernel: ata1.00: configured for UDMA/100 Sep 12 17:32:55.035225 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 17:32:55.080250 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 12 17:32:55.080510 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 12 17:32:55.080630 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 17:32:55.082551 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 12 17:32:55.082689 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:32:55.086179 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 12 17:32:55.087512 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 12 17:32:55.089687 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 12 17:32:55.092894 kernel: hub 1-0:1.0: USB hub found Sep 12 17:32:55.093051 kernel: hub 1-0:1.0: 4 ports detected Sep 12 17:32:55.095428 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 12 17:32:55.098938 kernel: hub 2-0:1.0: USB hub found Sep 12 17:32:55.099097 kernel: hub 2-0:1.0: 4 ports detected Sep 12 17:32:55.101296 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Sep 12 17:32:55.335194 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 12 17:32:55.471243 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:32:55.476620 kernel: usbcore: registered new interface driver usbhid Sep 12 17:32:55.476686 kernel: usbhid: USB HID core driver Sep 12 17:32:55.482579 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Sep 12 17:32:55.482634 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 12 17:32:55.831181 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:32:55.831764 disk-uuid[564]: The operation has completed successfully. Sep 12 17:32:55.882949 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:32:55.883050 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:32:55.896279 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:32:55.899249 sh[597]: Success Sep 12 17:32:55.912182 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 12 17:32:55.961505 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:32:55.970369 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:32:55.972572 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 17:32:55.989703 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:32:55.989773 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:32:55.991176 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:32:55.995183 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:32:55.995207 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:32:56.005172 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:32:56.007808 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:32:56.009081 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:32:56.016279 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:32:56.020280 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:32:56.033159 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:56.033215 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:32:56.033226 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:32:56.039355 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:32:56.039401 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:32:56.047641 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:32:56.049531 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:56.053076 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:32:56.058417 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:32:56.101746 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:32:56.111943 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:32:56.129848 ignition[705]: Ignition 2.19.0
Sep 12 17:32:56.129859 ignition[705]: Stage: fetch-offline
Sep 12 17:32:56.129885 ignition[705]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:56.132312 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:32:56.129910 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:32:56.130015 ignition[705]: parsed url from cmdline: ""
Sep 12 17:32:56.130018 ignition[705]: no config URL provided
Sep 12 17:32:56.130022 ignition[705]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:32:56.130029 ignition[705]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:32:56.130033 ignition[705]: failed to fetch config: resource requires networking
Sep 12 17:32:56.130323 ignition[705]: Ignition finished successfully
Sep 12 17:32:56.143386 systemd-networkd[778]: lo: Link UP
Sep 12 17:32:56.143396 systemd-networkd[778]: lo: Gained carrier
Sep 12 17:32:56.144971 systemd-networkd[778]: Enumeration completed
Sep 12 17:32:56.145191 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:32:56.145672 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:32:56.145675 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:32:56.146650 systemd[1]: Reached target network.target - Network.
Sep 12 17:32:56.146824 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:32:56.146828 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:32:56.147309 systemd-networkd[778]: eth0: Link UP
Sep 12 17:32:56.147313 systemd-networkd[778]: eth0: Gained carrier
Sep 12 17:32:56.147320 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:32:56.150550 systemd-networkd[778]: eth1: Link UP
Sep 12 17:32:56.150554 systemd-networkd[778]: eth1: Gained carrier
Sep 12 17:32:56.150561 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:32:56.155346 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:32:56.172505 ignition[786]: Ignition 2.19.0
Sep 12 17:32:56.172522 ignition[786]: Stage: fetch
Sep 12 17:32:56.172765 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:56.172781 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:32:56.172900 ignition[786]: parsed url from cmdline: ""
Sep 12 17:32:56.172907 ignition[786]: no config URL provided
Sep 12 17:32:56.172915 ignition[786]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:32:56.172930 ignition[786]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:32:56.172959 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 12 17:32:56.173186 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 12 17:32:56.182235 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 12 17:32:56.209232 systemd-networkd[778]: eth0: DHCPv4 address 95.216.139.29/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 12 17:32:56.373465 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 12 17:32:56.377238 ignition[786]: GET result: OK
Sep 12 17:32:56.377311 ignition[786]: parsing config with SHA512: 3bff8a2c15238caeda3b30f28c4a5d8c66f4b5819fcd3f0f4d74baf658fc98b5613445f7e85b0734df82d175566642f0d410a36563d75c55f957846a3b7fe6db
Sep 12 17:32:56.380784 unknown[786]: fetched base config from "system"
Sep 12 17:32:56.380793 unknown[786]: fetched base config from "system"
Sep 12 17:32:56.381135 ignition[786]: fetch: fetch complete
Sep 12 17:32:56.380797 unknown[786]: fetched user config from "hetzner"
Sep 12 17:32:56.381178 ignition[786]: fetch: fetch passed
Sep 12 17:32:56.382754 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:32:56.381225 ignition[786]: Ignition finished successfully
Sep 12 17:32:56.389300 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:32:56.402761 ignition[794]: Ignition 2.19.0
Sep 12 17:32:56.402773 ignition[794]: Stage: kargs
Sep 12 17:32:56.402937 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:56.405078 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:32:56.402947 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:32:56.403828 ignition[794]: kargs: kargs passed
Sep 12 17:32:56.403872 ignition[794]: Ignition finished successfully
Sep 12 17:32:56.415351 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:32:56.427736 ignition[800]: Ignition 2.19.0
Sep 12 17:32:56.427752 ignition[800]: Stage: disks
Sep 12 17:32:56.427927 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:56.427938 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:32:56.429770 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:32:56.428788 ignition[800]: disks: disks passed
Sep 12 17:32:56.435685 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:32:56.428829 ignition[800]: Ignition finished successfully
Sep 12 17:32:56.436549 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:32:56.437774 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:32:56.438826 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:32:56.440120 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:32:56.445361 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:32:56.460628 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 12 17:32:56.463823 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:32:56.469325 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:32:56.537171 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:32:56.537751 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:32:56.538886 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:32:56.554344 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:32:56.558270 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:32:56.560341 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:32:56.564215 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:32:56.572405 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (817)
Sep 12 17:32:56.572436 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:56.572451 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:32:56.572466 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:32:56.564271 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:32:56.576248 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:32:56.578510 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:32:56.583772 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:32:56.583816 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:32:56.589570 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:32:56.629804 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:32:56.636143 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:32:56.639867 coreos-metadata[819]: Sep 12 17:32:56.639 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 12 17:32:56.641499 coreos-metadata[819]: Sep 12 17:32:56.640 INFO Fetch successful
Sep 12 17:32:56.641499 coreos-metadata[819]: Sep 12 17:32:56.641 INFO wrote hostname ci-4081-3-6-c-e429241c3f to /sysroot/etc/hostname
Sep 12 17:32:56.643045 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:32:56.645702 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:32:56.648718 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:32:56.717311 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:32:56.723227 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:32:56.726280 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:32:56.732204 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:56.746535 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:32:56.752849 ignition[934]: INFO : Ignition 2.19.0
Sep 12 17:32:56.753924 ignition[934]: INFO : Stage: mount
Sep 12 17:32:56.753924 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:56.753924 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:32:56.755508 ignition[934]: INFO : mount: mount passed
Sep 12 17:32:56.755508 ignition[934]: INFO : Ignition finished successfully
Sep 12 17:32:56.756139 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:32:56.762240 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:32:56.987569 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:32:56.993322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:32:57.003175 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Sep 12 17:32:57.006394 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:57.006436 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:32:57.008928 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:32:57.013429 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:32:57.013461 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:32:57.016958 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:32:57.032956 ignition[961]: INFO : Ignition 2.19.0
Sep 12 17:32:57.032956 ignition[961]: INFO : Stage: files
Sep 12 17:32:57.034398 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:57.034398 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:32:57.036128 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:32:57.037004 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:32:57.037004 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:32:57.038876 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:32:57.039946 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:32:57.040937 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:32:57.039951 unknown[961]: wrote ssh authorized keys file for user: core
Sep 12 17:32:57.042756 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:32:57.042756 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 17:32:57.278874 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:32:57.531393 systemd-networkd[778]: eth0: Gained IPv6LL
Sep 12 17:32:57.595364 systemd-networkd[778]: eth1: Gained IPv6LL
Sep 12 17:32:57.709563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:32:57.709563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:32:57.713444 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 17:32:58.081573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:32:58.315519 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:32:58.315519 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:32:58.319239 ignition[961]: INFO : files: files passed
Sep 12 17:32:58.319239 ignition[961]: INFO : Ignition finished successfully
Sep 12 17:32:58.320777 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:32:58.329357 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:32:58.333356 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:32:58.339469 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:32:58.339614 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:32:58.359277 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:32:58.359277 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:32:58.362489 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:32:58.361560 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:32:58.363781 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:32:58.371400 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:32:58.415915 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:32:58.416073 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:32:58.418841 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:32:58.420703 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:32:58.423007 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:32:58.431472 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:32:58.450773 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:32:58.458442 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:32:58.477233 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:32:58.478442 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:32:58.480848 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:32:58.482962 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:32:58.483140 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:32:58.485639 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:32:58.487045 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:32:58.489181 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:32:58.491281 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:32:58.493139 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:32:58.495335 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:32:58.497532 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:32:58.499812 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:32:58.501937 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:32:58.504132 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:32:58.506191 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:32:58.506363 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:32:58.508813 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:32:58.510245 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:32:58.512193 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:32:58.512353 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:32:58.514520 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:32:58.514781 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:32:58.517819 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:32:58.517933 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:32:58.518836 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:32:58.518972 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:32:58.519900 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 17:32:58.520080 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:32:58.528502 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:32:58.532619 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:32:58.533114 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:32:58.544660 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:32:58.545916 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:32:58.546245 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:32:58.556100 ignition[1014]: INFO : Ignition 2.19.0
Sep 12 17:32:58.556100 ignition[1014]: INFO : Stage: umount
Sep 12 17:32:58.556100 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:58.556100 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 12 17:32:58.556100 ignition[1014]: INFO : umount: umount passed
Sep 12 17:32:58.556100 ignition[1014]: INFO : Ignition finished successfully
Sep 12 17:32:58.551664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:32:58.555194 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:32:58.565478 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:32:58.565633 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:32:58.572509 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:32:58.572681 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:32:58.585846 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:32:58.585968 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:32:58.587191 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:32:58.587281 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:32:58.593284 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:32:58.593338 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:32:58.598454 systemd[1]: Stopped target network.target - Network.
Sep 12 17:32:58.600474 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:32:58.600534 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:32:58.602529 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:32:58.604439 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:32:58.604489 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:32:58.606925 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:32:58.616851 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:32:58.618135 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:32:58.618261 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:32:58.628801 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:32:58.628868 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:32:58.630275 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:32:58.630347 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:32:58.631300 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:32:58.631363 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:32:58.632949 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:32:58.635115 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:32:58.636246 systemd-networkd[778]: eth1: DHCPv6 lease lost
Sep 12 17:32:58.641510 systemd-networkd[778]: eth0: DHCPv6 lease lost
Sep 12 17:32:58.642388 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:32:58.643609 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:32:58.643829 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:32:58.647420 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:32:58.647619 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:32:58.652768 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:32:58.652989 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:32:58.657501 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:32:58.657616 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:32:58.659618 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:32:58.659697 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:32:58.667335 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:32:58.669869 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:32:58.669985 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:32:58.676012 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:32:58.676112 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:32:58.678997 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:32:58.679093 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:32:58.684843 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:32:58.684957 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:32:58.687920 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:32:58.708443 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:32:58.709621 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:32:58.711564 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:32:58.711665 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:32:58.713981 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:32:58.714076 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:32:58.715385 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:32:58.715425 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:32:58.717050 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:32:58.717106 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:32:58.719811 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:32:58.719866 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:32:58.721753 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:32:58.721811 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:32:58.729522 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:32:58.731791 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:32:58.731866 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:32:58.735409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:32:58.735473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:32:58.737001 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:32:58.737091 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:32:58.738987 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:32:58.750589 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:32:58.759008 systemd[1]: Switching root.
Sep 12 17:32:58.808679 systemd-journald[187]: Journal stopped
Sep 12 17:32:59.629368 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:32:59.629418 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:32:59.629429 kernel: SELinux: policy capability open_perms=1
Sep 12 17:32:59.629440 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:32:59.629450 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:32:59.629457 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:32:59.629469 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:32:59.629478 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:32:59.629486 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:32:59.629494 kernel: audit: type=1403 audit(1757698378.922:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:32:59.629503 systemd[1]: Successfully loaded SELinux policy in 33.819ms.
Sep 12 17:32:59.629515 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.975ms.
Sep 12 17:32:59.629524 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:32:59.629535 systemd[1]: Detected virtualization kvm.
Sep 12 17:32:59.629543 systemd[1]: Detected architecture x86-64.
Sep 12 17:32:59.629551 systemd[1]: Detected first boot.
Sep 12 17:32:59.629562 systemd[1]: Hostname set to <ci-4081-3-6-c-e429241c3f>.
Sep 12 17:32:59.629571 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:32:59.629579 zram_generator::config[1056]: No configuration found.
Sep 12 17:32:59.629588 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:32:59.629597 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:32:59.629606 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:32:59.629614 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:32:59.629623 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:32:59.629633 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:32:59.629642 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:32:59.629650 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:32:59.629659 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:32:59.629669 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:32:59.629678 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:32:59.629686 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:32:59.629695 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:32:59.629703 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:32:59.629714 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:32:59.629723 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:32:59.629732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:32:59.629752 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:32:59.629761 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:32:59.629769 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:32:59.629778 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:32:59.629789 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:32:59.629797 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:32:59.629806 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:32:59.629815 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:32:59.629823 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:32:59.629831 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:32:59.629839 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:32:59.629847 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:32:59.629857 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:32:59.629865 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:32:59.629874 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:32:59.629882 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:32:59.629889 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:32:59.629897 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:32:59.629905 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:32:59.629914 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:32:59.629922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:32:59.629932 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:32:59.629940 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:32:59.629949 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:32:59.629957 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:32:59.629965 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:32:59.629973 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:32:59.629984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:32:59.629994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:32:59.630003 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:32:59.630011 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:32:59.630019 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:32:59.630027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:32:59.630035 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:32:59.630043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:32:59.630053 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:32:59.630062 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:32:59.630071 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:32:59.630079 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:32:59.630089 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:32:59.630098 kernel: fuse: init (API version 7.39)
Sep 12 17:32:59.630106 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:32:59.630115 kernel: loop: module loaded
Sep 12 17:32:59.630122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:32:59.630132 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:32:59.630140 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:32:59.630188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:32:59.630199 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:32:59.630207 systemd[1]: Stopped verity-setup.service.
Sep 12 17:32:59.630215 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:32:59.630224 kernel: ACPI: bus type drm_connector registered
Sep 12 17:32:59.630232 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:32:59.630243 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:32:59.630251 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:32:59.630274 systemd-journald[1133]: Collecting audit messages is disabled.
Sep 12 17:32:59.630293 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:32:59.630303 systemd-journald[1133]: Journal started
Sep 12 17:32:59.630326 systemd-journald[1133]: Runtime Journal (/run/log/journal/bda367f3ae5241e388b20e6ec86583b6) is 4.8M, max 38.4M, 33.6M free.
Sep 12 17:32:59.362384 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:32:59.379008 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 17:32:59.379417 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:32:59.634894 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:32:59.635874 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:32:59.636445 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:32:59.637104 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:32:59.637854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:32:59.638605 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:32:59.638716 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:32:59.639370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:32:59.639457 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:32:59.640066 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:32:59.640514 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:32:59.641193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:32:59.641346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:32:59.642039 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:32:59.642309 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:32:59.643007 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:32:59.643209 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:32:59.643880 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:32:59.644555 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:32:59.645487 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:32:59.653007 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:32:59.659000 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:32:59.662202 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:32:59.663661 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:32:59.663691 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:32:59.665241 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:32:59.677351 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:32:59.684320 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:32:59.685430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:32:59.689231 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:32:59.690604 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:32:59.691602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:32:59.697302 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:32:59.697878 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:32:59.701269 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:32:59.712668 systemd-journald[1133]: Time spent on flushing to /var/log/journal/bda367f3ae5241e388b20e6ec86583b6 is 62.927ms for 1123 entries.
Sep 12 17:32:59.712668 systemd-journald[1133]: System Journal (/var/log/journal/bda367f3ae5241e388b20e6ec86583b6) is 8.0M, max 584.8M, 576.8M free.
Sep 12 17:32:59.782999 systemd-journald[1133]: Received client request to flush runtime journal.
Sep 12 17:32:59.783067 kernel: loop0: detected capacity change from 0 to 221472
Sep 12 17:32:59.703756 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:32:59.706277 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:32:59.711493 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:32:59.712508 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:32:59.720879 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:32:59.722562 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:32:59.745720 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:32:59.748358 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:32:59.748985 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:32:59.755365 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:32:59.783030 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 17:32:59.784503 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:32:59.791116 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:32:59.799941 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:32:59.800666 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:32:59.810288 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:32:59.816431 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:32:59.818908 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:32:59.838105 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Sep 12 17:32:59.838249 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Sep 12 17:32:59.841088 kernel: loop1: detected capacity change from 0 to 142488
Sep 12 17:32:59.846531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:32:59.894182 kernel: loop2: detected capacity change from 0 to 140768
Sep 12 17:32:59.934257 kernel: loop3: detected capacity change from 0 to 8
Sep 12 17:32:59.950260 kernel: loop4: detected capacity change from 0 to 221472
Sep 12 17:32:59.974188 kernel: loop5: detected capacity change from 0 to 142488
Sep 12 17:33:00.002182 kernel: loop6: detected capacity change from 0 to 140768
Sep 12 17:33:00.023211 kernel: loop7: detected capacity change from 0 to 8
Sep 12 17:33:00.023680 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Sep 12 17:33:00.024095 (sd-merge)[1202]: Merged extensions into '/usr'.
Sep 12 17:33:00.027766 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:33:00.027867 systemd[1]: Reloading...
Sep 12 17:33:00.094170 zram_generator::config[1228]: No configuration found.
Sep 12 17:33:00.191073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:33:00.193817 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:33:00.229582 systemd[1]: Reloading finished in 201 ms.
Sep 12 17:33:00.249496 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:33:00.250242 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:33:00.259658 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:33:00.261130 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:33:00.266961 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:33:00.266972 systemd[1]: Reloading...
Sep 12 17:33:00.287775 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:33:00.288015 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:33:00.288583 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:33:00.288859 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Sep 12 17:33:00.288960 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Sep 12 17:33:00.293798 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:33:00.293807 systemd-tmpfiles[1273]: Skipping /boot
Sep 12 17:33:00.304959 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:33:00.307335 systemd-tmpfiles[1273]: Skipping /boot
Sep 12 17:33:00.331176 zram_generator::config[1299]: No configuration found.
Sep 12 17:33:00.415565 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:33:00.452787 systemd[1]: Reloading finished in 185 ms.
Sep 12 17:33:00.466636 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:33:00.471493 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:33:00.476314 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:33:00.480284 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:33:00.482911 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:33:00.487261 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:33:00.489266 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:33:00.494067 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:33:00.499025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:00.499179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:33:00.506341 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:33:00.508504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:33:00.511963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:33:00.512628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:33:00.512716 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:00.521541 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:33:00.526097 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:00.526652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:33:00.527039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:33:00.527194 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:00.531664 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:33:00.533499 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:00.533838 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
Sep 12 17:33:00.534295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:33:00.548514 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:33:00.549249 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:33:00.549367 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:00.550109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:33:00.555635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:33:00.557568 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:33:00.558388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:33:00.558679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:33:00.559973 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:33:00.560061 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:33:00.563978 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:33:00.564987 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:33:00.565169 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:33:00.568101 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:33:00.568327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:33:00.577324 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:33:00.584267 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:33:00.584877 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:33:00.585400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:33:00.601373 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:33:00.603964 augenrules[1391]: No rules
Sep 12 17:33:00.604170 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:33:00.612106 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:33:00.647929 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:33:00.648920 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:33:00.696647 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:33:00.699934 systemd-networkd[1386]: lo: Link UP
Sep 12 17:33:00.702202 systemd-networkd[1386]: lo: Gained carrier
Sep 12 17:33:00.704382 systemd-networkd[1386]: Enumeration completed
Sep 12 17:33:00.704479 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:33:00.713344 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:33:00.713948 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:33:00.714468 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:33:00.715292 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:33:00.715297 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:33:00.715841 systemd-networkd[1386]: eth0: Link UP Sep 12 17:33:00.715845 systemd-networkd[1386]: eth0: Gained carrier Sep 12 17:33:00.715858 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:33:00.724321 systemd-resolved[1348]: Positive Trust Anchors: Sep 12 17:33:00.724554 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:33:00.724621 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:33:00.730129 systemd-resolved[1348]: Using system hostname 'ci-4081-3-6-c-e429241c3f'. Sep 12 17:33:00.733697 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:33:00.735261 systemd[1]: Reached target network.target - Network. Sep 12 17:33:00.735691 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:33:00.741266 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 17:33:00.744169 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:33:00.750822 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:33:00.759403 systemd-networkd[1386]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:33:00.759415 systemd-networkd[1386]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:33:00.759886 systemd-networkd[1386]: eth1: Link UP Sep 12 17:33:00.759895 systemd-networkd[1386]: eth1: Gained carrier Sep 12 17:33:00.759908 systemd-networkd[1386]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:33:00.772188 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:33:00.772209 systemd-networkd[1386]: eth0: DHCPv4 address 95.216.139.29/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 12 17:33:00.773380 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Sep 12 17:33:00.780917 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 12 17:33:00.780981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:00.781086 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:33:00.784343 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1405) Sep 12 17:33:00.784521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
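Note: the positive trust anchor logged by systemd-resolved is the DNSSEC root key's DS record, and the negative anchors are the private/reserved zones it will not attempt to validate. The resulting per-link resolver state can be inspected at runtime:

    resolvectl status                          # DNS servers and DNSSEC mode per link
    resolvectl query ci-4081-3-6-c-e429241c3f  # resolve the hostname chosen above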
Sep 12 17:33:00.785210 systemd-networkd[1386]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 12 17:33:00.794381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:33:00.796379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:33:00.798096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:33:00.798127 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:33:00.798140 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:00.798389 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:33:00.798492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:33:00.799042 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:33:00.804076 systemd-timesyncd[1377]: Contacted time server 78.47.93.191:123 (0.flatcar.pool.ntp.org). Sep 12 17:33:00.804202 systemd-timesyncd[1377]: Initial clock synchronization to Fri 2025-09-12 17:33:00.951897 UTC. Sep 12 17:33:00.812731 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:33:00.812858 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:33:00.817263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:33:00.817372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:33:00.817998 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:33:00.825255 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 12 17:33:00.828163 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Sep 12 17:33:00.835183 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Sep 12 17:33:00.856451 kernel: Console: switching to colour dummy device 80x25 Sep 12 17:33:00.859230 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 12 17:33:00.859254 kernel: [drm] features: -context_init Sep 12 17:33:00.859275 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 17:33:00.859415 kernel: [drm] number of scanouts: 1 Sep 12 17:33:00.862942 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 12 17:33:00.863098 kernel: [drm] number of cap sets: 0 Sep 12 17:33:00.863112 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 17:33:00.864474 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:33:00.875293 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 12 17:33:00.876632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:00.882335 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 12 17:33:00.886520 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
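Note: systemd-timesyncd picked its server from the distribution default pool (0.flatcar.pool.ntp.org above). A sketch of overriding the server list with a drop-in; the second pool name is an assumed example, not taken from this host:

    mkdir -p /etc/systemd/timesyncd.conf.d
    cat <<'EOF' >/etc/systemd/timesyncd.conf.d/10-ntp.conf
    [Time]
    NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org
    EOF
    systemctl restart systemd-timesyncd
    timedatectl timesync-status   # shows the contacted server and current offset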
Sep 12 17:33:00.897004 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 12 17:33:00.897037 kernel: Console: switching to colour frame buffer device 160x50 Sep 12 17:33:00.913193 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 12 17:33:00.923966 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:33:00.925591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:33:00.925728 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:00.933328 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:00.986295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:01.051324 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:33:01.059426 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:33:01.071309 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:33:01.096078 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:33:01.096415 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:33:01.096494 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:33:01.096829 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:33:01.098362 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:33:01.098823 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:33:01.099153 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:33:01.099363 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:33:01.099498 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:33:01.099543 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:33:01.099667 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:33:01.104133 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:33:01.105799 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:33:01.116389 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:33:01.119488 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:33:01.122781 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:33:01.123063 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:33:01.123790 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:33:01.126447 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:33:01.126508 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:33:01.128503 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:33:01.129431 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:33:01.143397 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
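Note: several recurring jobs were just armed above (logrotate.timer, mdadm.timer, systemd-tmpfiles-clean.timer). Their schedules and last/next activations can be listed directly:

    systemctl list-timers --all
    systemctl cat logrotate.timer   # the OnCalendar= line behind "Daily rotation"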
Sep 12 17:33:01.157481 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:33:01.164896 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:33:01.169302 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:33:01.171666 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:33:01.173070 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:33:01.181273 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:33:01.190382 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 12 17:33:01.202295 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:33:01.204759 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:33:01.211861 coreos-metadata[1459]: Sep 12 17:33:01.211 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 12 17:33:01.211861 coreos-metadata[1459]: Sep 12 17:33:01.211 INFO Fetch successful Sep 12 17:33:01.211861 coreos-metadata[1459]: Sep 12 17:33:01.211 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 12 17:33:01.211861 coreos-metadata[1459]: Sep 12 17:33:01.211 INFO Fetch successful Sep 12 17:33:01.217337 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:33:01.220449 jq[1463]: false Sep 12 17:33:01.219546 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:33:01.219961 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:33:01.229339 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:33:01.237337 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:33:01.239982 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:33:01.244544 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:33:01.245238 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:33:01.246799 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:33:01.247315 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:33:01.248980 jq[1480]: true Sep 12 17:33:01.252691 dbus-daemon[1462]: [system] SELinux support is enabled Sep 12 17:33:01.255357 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:33:01.264125 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:33:01.265737 jq[1487]: true Sep 12 17:33:01.265723 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:33:01.267816 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
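Note: coreos-metadata resolved both fetches against Hetzner's link-local metadata service. The same endpoints it logs can be queried by hand from the instance:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks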
Sep 12 17:33:01.267842 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:33:01.280556 extend-filesystems[1464]: Found loop4 Sep 12 17:33:01.296577 extend-filesystems[1464]: Found loop5 Sep 12 17:33:01.296577 extend-filesystems[1464]: Found loop6 Sep 12 17:33:01.296577 extend-filesystems[1464]: Found loop7 Sep 12 17:33:01.296577 extend-filesystems[1464]: Found sda Sep 12 17:33:01.296577 extend-filesystems[1464]: Found sda1 Sep 12 17:33:01.296577 extend-filesystems[1464]: Found sda2 Sep 12 17:33:01.296577 extend-filesystems[1464]: Found sda3 Sep 12 17:33:01.296577 extend-filesystems[1464]: Found usr Sep 12 17:33:01.296577 extend-filesystems[1464]: Found sda4 Sep 12 17:33:01.296577 extend-filesystems[1464]: Found sda6 Sep 12 17:33:01.296577 extend-filesystems[1464]: Found sda7 Sep 12 17:33:01.296577 extend-filesystems[1464]: Found sda9 Sep 12 17:33:01.296577 extend-filesystems[1464]: Checking size of /dev/sda9 Sep 12 17:33:01.380594 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 12 17:33:01.380648 update_engine[1478]: I20250912 17:33:01.289388 1478 main.cc:92] Flatcar Update Engine starting Sep 12 17:33:01.380648 update_engine[1478]: I20250912 17:33:01.292184 1478 update_check_scheduler.cc:74] Next update check in 8m28s Sep 12 17:33:01.292700 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:33:01.395665 extend-filesystems[1464]: Resized partition /dev/sda9 Sep 12 17:33:01.294898 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:33:01.425754 extend-filesystems[1510]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:33:01.304494 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:33:01.427374 tar[1484]: linux-amd64/helm Sep 12 17:33:01.316782 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:33:01.316929 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:33:01.385391 systemd-logind[1473]: New seat seat0. Sep 12 17:33:01.422943 systemd-logind[1473]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 17:33:01.422956 systemd-logind[1473]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:33:01.423331 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:33:01.459446 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:33:01.460789 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:33:01.466497 bash[1521]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:33:01.476016 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1393) Sep 12 17:33:01.469970 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:33:01.490047 systemd[1]: Starting sshkeys.service... Sep 12 17:33:01.510269 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 12 17:33:01.528735 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:33:01.539439 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
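Note: extend-filesystems walks the block devices it found and grows the root filesystem on-line; the kernel line above shows /dev/sda9 going from 1617920 to 9393147 4k blocks. A sketch of the manual equivalent, assuming the partition itself has already been enlarged underneath:

    lsblk /dev/sda        # confirm sda9 now spans the new space
    resize2fs /dev/sda9   # ext4 on-line grow; safe while mounted at /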
Sep 12 17:33:01.549611 extend-filesystems[1510]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 12 17:33:01.549611 extend-filesystems[1510]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 12 17:33:01.549611 extend-filesystems[1510]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 12 17:33:01.550728 extend-filesystems[1464]: Resized filesystem in /dev/sda9 Sep 12 17:33:01.550728 extend-filesystems[1464]: Found sr0 Sep 12 17:33:01.555534 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:33:01.563264 sshd_keygen[1496]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:33:01.555694 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:33:01.585241 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:33:01.607311 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:33:01.620494 coreos-metadata[1540]: Sep 12 17:33:01.620 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 12 17:33:01.621416 coreos-metadata[1540]: Sep 12 17:33:01.621 INFO Fetch successful Sep 12 17:33:01.626044 unknown[1540]: wrote ssh authorized keys file for user: core Sep 12 17:33:01.628039 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:33:01.630393 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:33:01.630890 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:33:01.642521 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:33:01.646996 containerd[1498]: time="2025-09-12T17:33:01.646427144Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:33:01.662306 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:33:01.670663 update-ssh-keys[1561]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:33:01.673581 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:33:01.676310 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:33:01.680129 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:33:01.681928 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:33:01.689373 systemd[1]: Finished sshkeys.service. Sep 12 17:33:01.701121 containerd[1498]: time="2025-09-12T17:33:01.701089370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:01.702548 containerd[1498]: time="2025-09-12T17:33:01.702521247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:33:01.702629 containerd[1498]: time="2025-09-12T17:33:01.702616166Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:33:01.702678 containerd[1498]: time="2025-09-12T17:33:01.702667604Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:33:01.702917 containerd[1498]: time="2025-09-12T17:33:01.702885011Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 12 17:33:01.702991 containerd[1498]: time="2025-09-12T17:33:01.702965429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:01.703114 containerd[1498]: time="2025-09-12T17:33:01.703084050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:33:01.703185 containerd[1498]: time="2025-09-12T17:33:01.703173324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:01.703407 containerd[1498]: time="2025-09-12T17:33:01.703390283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:33:01.703484 containerd[1498]: time="2025-09-12T17:33:01.703471394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:01.703564 containerd[1498]: time="2025-09-12T17:33:01.703550006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:33:01.703608 containerd[1498]: time="2025-09-12T17:33:01.703597924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:01.703754 containerd[1498]: time="2025-09-12T17:33:01.703740648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:01.704038 containerd[1498]: time="2025-09-12T17:33:01.704024003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:01.704228 containerd[1498]: time="2025-09-12T17:33:01.704212237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:33:01.704294 containerd[1498]: time="2025-09-12T17:33:01.704282337Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:33:01.704422 containerd[1498]: time="2025-09-12T17:33:01.704407174Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:33:01.704523 containerd[1498]: time="2025-09-12T17:33:01.704509836Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:33:01.708424 containerd[1498]: time="2025-09-12T17:33:01.708406616Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:33:01.708519 containerd[1498]: time="2025-09-12T17:33:01.708506013Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:33:01.708606 containerd[1498]: time="2025-09-12T17:33:01.708593819Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:33:01.708807 containerd[1498]: time="2025-09-12T17:33:01.708660125Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Sep 12 17:33:01.708807 containerd[1498]: time="2025-09-12T17:33:01.708677146Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:33:01.708807 containerd[1498]: time="2025-09-12T17:33:01.708768809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:33:01.709075 containerd[1498]: time="2025-09-12T17:33:01.709051204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:33:01.709230 containerd[1498]: time="2025-09-12T17:33:01.709215040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:33:01.709290 containerd[1498]: time="2025-09-12T17:33:01.709279039Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:33:01.709332 containerd[1498]: time="2025-09-12T17:33:01.709322753Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709362702Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709377100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709387344Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709398793Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709414171Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709424130Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709435762Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709445069Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709461150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709481221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709494742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709508803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709518987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 12 17:33:01.709699 containerd[1498]: time="2025-09-12T17:33:01.709529507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709538741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709549200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709559150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709570823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709580302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709589109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709598435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709609762Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709642404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709653639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.709941 containerd[1498]: time="2025-09-12T17:33:01.709662068Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:33:01.710119 containerd[1498]: time="2025-09-12T17:33:01.710105452Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:33:01.710234 containerd[1498]: time="2025-09-12T17:33:01.710219278Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:33:01.710909 containerd[1498]: time="2025-09-12T17:33:01.710269962Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:33:01.710909 containerd[1498]: time="2025-09-12T17:33:01.710285176Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:33:01.710909 containerd[1498]: time="2025-09-12T17:33:01.710294043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.710909 containerd[1498]: time="2025-09-12T17:33:01.710304277Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:33:01.710909 containerd[1498]: time="2025-09-12T17:33:01.710312256Z" level=info msg="NRI interface is disabled by configuration." 
Sep 12 17:33:01.710909 containerd[1498]: time="2025-09-12T17:33:01.710320460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:33:01.711015 containerd[1498]: time="2025-09-12T17:33:01.710527173Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:33:01.711015 containerd[1498]: time="2025-09-12T17:33:01.710574204Z" level=info msg="Connect containerd service" Sep 12 17:33:01.711015 containerd[1498]: time="2025-09-12T17:33:01.710598745Z" level=info msg="using legacy CRI server" Sep 12 17:33:01.711015 containerd[1498]: time="2025-09-12T17:33:01.710604122Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:33:01.711015 containerd[1498]: time="2025-09-12T17:33:01.710698132Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:33:01.711520 containerd[1498]: time="2025-09-12T17:33:01.711501403Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:33:01.711703 containerd[1498]: time="2025-09-12T17:33:01.711663411Z" level=info msg="Start subscribing containerd event" Sep 12 17:33:01.711964 containerd[1498]: time="2025-09-12T17:33:01.711949674Z" level=info msg="Start recovering state" Sep 12 17:33:01.712052 containerd[1498]: time="2025-09-12T17:33:01.712040775Z" level=info msg="Start event monitor" Sep 12 17:33:01.712101 containerd[1498]: time="2025-09-12T17:33:01.712091826Z" level=info msg="Start snapshots syncer" Sep 12 17:33:01.712179 containerd[1498]: time="2025-09-12T17:33:01.712144295Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:33:01.712222 containerd[1498]: time="2025-09-12T17:33:01.712211784Z" level=info msg="Start streaming server" Sep 12 17:33:01.712323 containerd[1498]: time="2025-09-12T17:33:01.711913930Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:33:01.712400 containerd[1498]: time="2025-09-12T17:33:01.712388080Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:33:01.712602 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:33:01.715760 containerd[1498]: time="2025-09-12T17:33:01.715722027Z" level=info msg="containerd successfully booted in 0.074471s" Sep 12 17:33:01.961209 tar[1484]: linux-amd64/LICENSE Sep 12 17:33:01.961470 tar[1484]: linux-amd64/README.md Sep 12 17:33:01.972375 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:33:02.075683 systemd-networkd[1386]: eth0: Gained IPv6LL Sep 12 17:33:02.079438 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:33:02.083037 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:33:02.091465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:02.095859 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:33:02.129411 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:33:02.140549 systemd-networkd[1386]: eth1: Gained IPv6LL Sep 12 17:33:03.036013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:03.039762 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:33:03.043214 systemd[1]: Startup finished in 1.225s (kernel) + 5.247s (initrd) + 4.153s (userspace) = 10.626s. Sep 12 17:33:03.048422 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:33:03.605844 kubelet[1590]: E0912 17:33:03.605766 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:33:03.608529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:33:03.608761 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:33:03.609068 systemd[1]: kubelet.service: Consumed 1.023s CPU time. Sep 12 17:33:10.380256 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 12 17:33:10.392846 systemd[1]: Started sshd@0-95.216.139.29:22-147.75.109.163:55556.service - OpenSSH per-connection server daemon (147.75.109.163:55556). Sep 12 17:33:11.475134 sshd[1602]: Accepted publickey for core from 147.75.109.163 port 55556 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:33:11.477193 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:11.486066 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:33:11.491599 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:33:11.493906 systemd-logind[1473]: New session 1 of user core. Sep 12 17:33:11.504567 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:33:11.508510 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:33:11.515353 (systemd)[1606]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:33:11.598130 systemd[1606]: Queued start job for default target default.target. Sep 12 17:33:11.608880 systemd[1606]: Created slice app.slice - User Application Slice. Sep 12 17:33:11.608902 systemd[1606]: Reached target paths.target - Paths. Sep 12 17:33:11.608912 systemd[1606]: Reached target timers.target - Timers. Sep 12 17:33:11.609948 systemd[1606]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:33:11.619645 systemd[1606]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:33:11.619683 systemd[1606]: Reached target sockets.target - Sockets. Sep 12 17:33:11.619695 systemd[1606]: Reached target basic.target - Basic System. Sep 12 17:33:11.619722 systemd[1606]: Reached target default.target - Main User Target. Sep 12 17:33:11.619741 systemd[1606]: Startup finished in 98ms. Sep 12 17:33:11.620040 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:33:11.627282 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:33:12.366535 systemd[1]: Started sshd@1-95.216.139.29:22-147.75.109.163:55568.service - OpenSSH per-connection server daemon (147.75.109.163:55568). Sep 12 17:33:13.337791 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 55568 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:33:13.339420 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:13.344817 systemd-logind[1473]: New session 2 of user core. Sep 12 17:33:13.346297 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:33:13.855844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:33:13.862562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:13.993658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:13.997016 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:33:14.014576 sshd[1617]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:14.019736 systemd[1]: sshd@1-95.216.139.29:22-147.75.109.163:55568.service: Deactivated successfully. Sep 12 17:33:14.022579 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:33:14.023699 systemd-logind[1473]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:33:14.024746 systemd-logind[1473]: Removed session 2. 
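Note: "Scheduled restart job, restart counter is at 1" is the unit's Restart= policy at work. A sketch of the knobs involved, expressed as a drop-in; the kubelet unit on this image is evidently already configured with something equivalent, given the scheduled restarts:

    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-restart.conf
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload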
Sep 12 17:33:14.039405 kubelet[1629]: E0912 17:33:14.039339 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:33:14.043992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:33:14.044197 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:33:14.185520 systemd[1]: Started sshd@2-95.216.139.29:22-147.75.109.163:55578.service - OpenSSH per-connection server daemon (147.75.109.163:55578). Sep 12 17:33:15.154366 sshd[1639]: Accepted publickey for core from 147.75.109.163 port 55578 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:33:15.156016 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:15.162229 systemd-logind[1473]: New session 3 of user core. Sep 12 17:33:15.167353 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:33:15.825260 sshd[1639]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:15.828377 systemd[1]: sshd@2-95.216.139.29:22-147.75.109.163:55578.service: Deactivated successfully. Sep 12 17:33:15.830763 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:33:15.832603 systemd-logind[1473]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:33:15.834086 systemd-logind[1473]: Removed session 3. Sep 12 17:33:16.000531 systemd[1]: Started sshd@3-95.216.139.29:22-147.75.109.163:55580.service - OpenSSH per-connection server daemon (147.75.109.163:55580). Sep 12 17:33:16.972555 sshd[1646]: Accepted publickey for core from 147.75.109.163 port 55580 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:33:16.974670 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:16.981225 systemd-logind[1473]: New session 4 of user core. Sep 12 17:33:16.989415 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:33:17.648005 sshd[1646]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:17.650367 systemd[1]: sshd@3-95.216.139.29:22-147.75.109.163:55580.service: Deactivated successfully. Sep 12 17:33:17.651781 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:33:17.652721 systemd-logind[1473]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:33:17.653822 systemd-logind[1473]: Removed session 4. Sep 12 17:33:17.814407 systemd[1]: Started sshd@4-95.216.139.29:22-147.75.109.163:55588.service - OpenSSH per-connection server daemon (147.75.109.163:55588). Sep 12 17:33:18.786094 sshd[1653]: Accepted publickey for core from 147.75.109.163 port 55588 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:33:18.787388 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:18.792069 systemd-logind[1473]: New session 5 of user core. Sep 12 17:33:18.793301 systemd[1]: Started session-5.scope - Session 5 of User core. 
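Note: each accepted SSH login above becomes a logind session scope under user-500.slice, with a single user@500.service manager shared across all of core's sessions. The lifecycle seen in the log can be inspected with:

    loginctl list-sessions
    loginctl session-status 5         # e.g. the session opened below
    systemctl status user@500.service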
Sep 12 17:33:19.311334 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:33:19.311614 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:33:19.327704 sudo[1656]: pam_unix(sudo:session): session closed for user root Sep 12 17:33:19.486681 sshd[1653]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:19.491782 systemd[1]: sshd@4-95.216.139.29:22-147.75.109.163:55588.service: Deactivated successfully. Sep 12 17:33:19.494631 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:33:19.496470 systemd-logind[1473]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:33:19.498381 systemd-logind[1473]: Removed session 5. Sep 12 17:33:19.688769 systemd[1]: Started sshd@5-95.216.139.29:22-147.75.109.163:55598.service - OpenSSH per-connection server daemon (147.75.109.163:55598). Sep 12 17:33:20.767717 sshd[1661]: Accepted publickey for core from 147.75.109.163 port 55598 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:33:20.769223 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:20.774127 systemd-logind[1473]: New session 6 of user core. Sep 12 17:33:20.779319 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:33:21.340002 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:33:21.340327 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:33:21.343291 sudo[1665]: pam_unix(sudo:session): session closed for user root Sep 12 17:33:21.347585 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:33:21.347840 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:33:21.364384 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:33:21.366029 auditctl[1668]: No rules Sep 12 17:33:21.366527 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:33:21.366712 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:33:21.368670 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:33:21.390048 augenrules[1686]: No rules Sep 12 17:33:21.390589 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:33:21.392533 sudo[1664]: pam_unix(sudo:session): session closed for user root Sep 12 17:33:21.568495 sshd[1661]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:21.571003 systemd[1]: sshd@5-95.216.139.29:22-147.75.109.163:55598.service: Deactivated successfully. Sep 12 17:33:21.572447 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:33:21.573351 systemd-logind[1473]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:33:21.574393 systemd-logind[1473]: Removed session 6. Sep 12 17:33:21.760612 systemd[1]: Started sshd@6-95.216.139.29:22-147.75.109.163:47892.service - OpenSSH per-connection server daemon (147.75.109.163:47892). Sep 12 17:33:22.832845 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 47892 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:33:22.834295 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:22.840330 systemd-logind[1473]: New session 7 of user core. 
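Note: the sudo sequence above deletes the shipped rule files from /etc/audit/rules.d and restarts audit-rules, which then loads an empty set ("No rules" from both auditctl and augenrules). The same cycle by hand:

    auditctl -l        # list currently loaded kernel audit rules
    augenrules --load  # recompile /etc/audit/rules.d/*.rules and load the result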
Sep 12 17:33:22.846326 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:33:23.407721 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:33:23.408246 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:33:23.780682 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:33:23.781908 (dockerd)[1714]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:33:24.069456 dockerd[1714]: time="2025-09-12T17:33:24.069279287Z" level=info msg="Starting up" Sep 12 17:33:24.076605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:33:24.083340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:24.174140 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport152217523-merged.mount: Deactivated successfully. Sep 12 17:33:24.211470 systemd[1]: var-lib-docker-metacopy\x2dcheck720447581-merged.mount: Deactivated successfully. Sep 12 17:33:24.236963 dockerd[1714]: time="2025-09-12T17:33:24.236933651Z" level=info msg="Loading containers: start." Sep 12 17:33:24.237674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:24.248537 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:33:24.302801 kubelet[1740]: E0912 17:33:24.302713 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:33:24.304747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:33:24.305051 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:33:24.332177 kernel: Initializing XFRM netlink socket Sep 12 17:33:24.429283 systemd-networkd[1386]: docker0: Link UP Sep 12 17:33:24.442313 dockerd[1714]: time="2025-09-12T17:33:24.442254384Z" level=info msg="Loading containers: done." Sep 12 17:33:24.458083 dockerd[1714]: time="2025-09-12T17:33:24.458033327Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:33:24.458248 dockerd[1714]: time="2025-09-12T17:33:24.458129128Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:33:24.458279 dockerd[1714]: time="2025-09-12T17:33:24.458265079Z" level=info msg="Daemon has completed initialization" Sep 12 17:33:24.488362 dockerd[1714]: time="2025-09-12T17:33:24.486570283Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:33:24.488283 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:33:25.550238 containerd[1498]: time="2025-09-12T17:33:25.550140720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:33:26.118400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1670995757.mount: Deactivated successfully. 
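Note: dockerd settled on the overlay2 storage driver and warned that native diff is degraded because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; this is informational, not a failure. Quick verification once the daemon is up:

    docker info --format '{{.Driver}}'             # expected: overlay2
    docker version --format '{{.Server.Version}}'  # 26.1.0 per the log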
Sep 12 17:33:27.195012 containerd[1498]: time="2025-09-12T17:33:27.194944896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:27.196112 containerd[1498]: time="2025-09-12T17:33:27.195905458Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117224" Sep 12 17:33:27.197985 containerd[1498]: time="2025-09-12T17:33:27.197000750Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:27.199320 containerd[1498]: time="2025-09-12T17:33:27.199299717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:27.200081 containerd[1498]: time="2025-09-12T17:33:27.200055007Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.649829847s" Sep 12 17:33:27.200117 containerd[1498]: time="2025-09-12T17:33:27.200087406Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:33:27.201083 containerd[1498]: time="2025-09-12T17:33:27.201064890Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:33:28.286024 containerd[1498]: time="2025-09-12T17:33:28.285971975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:28.287324 containerd[1498]: time="2025-09-12T17:33:28.287107495Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716654" Sep 12 17:33:28.288807 containerd[1498]: time="2025-09-12T17:33:28.288567234Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:28.291235 containerd[1498]: time="2025-09-12T17:33:28.291200973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:28.292650 containerd[1498]: time="2025-09-12T17:33:28.291941985Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.090851894s" Sep 12 17:33:28.292650 containerd[1498]: time="2025-09-12T17:33:28.291968077Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 12 17:33:28.292650 containerd[1498]: time="2025-09-12T17:33:28.292530564Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:33:29.384561 containerd[1498]: time="2025-09-12T17:33:29.384500777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:29.385545 containerd[1498]: time="2025-09-12T17:33:29.385506394Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787720" Sep 12 17:33:29.386507 containerd[1498]: time="2025-09-12T17:33:29.386199336Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:29.389470 containerd[1498]: time="2025-09-12T17:33:29.389444686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:29.390301 containerd[1498]: time="2025-09-12T17:33:29.390276262Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.097725591s" Sep 12 17:33:29.390341 containerd[1498]: time="2025-09-12T17:33:29.390305008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:33:29.390698 containerd[1498]: time="2025-09-12T17:33:29.390660671Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:33:30.451524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052676551.mount: Deactivated successfully.
Sep 12 17:33:30.702516 containerd[1498]: time="2025-09-12T17:33:30.702238944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:30.703301 containerd[1498]: time="2025-09-12T17:33:30.703234874Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410280" Sep 12 17:33:30.704177 containerd[1498]: time="2025-09-12T17:33:30.703902542Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:30.705425 containerd[1498]: time="2025-09-12T17:33:30.705382575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:30.706338 containerd[1498]: time="2025-09-12T17:33:30.705984736Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.315291803s" Sep 12 17:33:30.706338 containerd[1498]: time="2025-09-12T17:33:30.706014523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:33:30.706596 containerd[1498]: time="2025-09-12T17:33:30.706570019Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:33:31.199168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1084605138.mount: Deactivated successfully. 
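
The ImageCreate/Pulled pairs above are emitted by containerd's CRI plugin while the kubelet pre-pulls its control-plane images. For reference, the same pull can be driven directly with containerd 1.7's Go client; this is only a sketch, assuming the default socket path and the CRI "k8s.io" namespace that this log shows in use:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Socket path and namespace match what the CRI plugin uses on this host.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.31.13",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
    }

The tmpmount units being deactivated between pulls are the scratch mounts containerd creates while unpacking layers, cleaned up by systemd once each unpack finishes.
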
Sep 12 17:33:31.891640 containerd[1498]: time="2025-09-12T17:33:31.891560672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:31.892739 containerd[1498]: time="2025-09-12T17:33:31.892692826Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Sep 12 17:33:31.895163 containerd[1498]: time="2025-09-12T17:33:31.893524335Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:31.896702 containerd[1498]: time="2025-09-12T17:33:31.896679521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:31.897750 containerd[1498]: time="2025-09-12T17:33:31.897718389Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.191115667s" Sep 12 17:33:31.897790 containerd[1498]: time="2025-09-12T17:33:31.897755451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:33:31.898469 containerd[1498]: time="2025-09-12T17:33:31.898445898Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:33:32.350431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184166471.mount: Deactivated successfully. 
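
Each "Pulled image ..." message packs the image reference, image id, repo tag, repo digest, unpacked size and duration into one quoted string. A small stdlib sketch that recovers those fields from such a message with a regular expression; the pattern is ours, derived from the records above, applied after journal quoting has been stripped:

    package main

    import (
        "fmt"
        "regexp"
        "time"
    )

    // pulledRe matches the msg payload of containerd's "Pulled image" records
    // as they appear in this log (quotes unescaped).
    var pulledRe = regexp.MustCompile(`Pulled image "([^"]+)" with image id "([^"]+)", ` +
        `repo tag "([^"]+)", repo digest "([^"]+)", size "(\d+)" in (\S+)`)

    func main() {
        msg := `Pulled image "registry.k8s.io/coredns/coredns:v1.11.3" with image id "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6", repo tag "registry.k8s.io/coredns/coredns:v1.11.3", repo digest "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e", size "18562039" in 1.191115667s`

        m := pulledRe.FindStringSubmatch(msg)
        if m == nil {
            fmt.Println("no match")
            return
        }
        d, _ := time.ParseDuration(m[6]) // handles both "1.19...s" and "462...ms" forms
        fmt.Printf("image=%s\ndigest=%s\nsize=%s bytes\nduration=%s\n", m[1], m[4], m[5], d)
    }
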
Sep 12 17:33:32.355594 containerd[1498]: time="2025-09-12T17:33:32.355503881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:32.356604 containerd[1498]: time="2025-09-12T17:33:32.356543888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Sep 12 17:33:32.357299 containerd[1498]: time="2025-09-12T17:33:32.357235828Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:32.359980 containerd[1498]: time="2025-09-12T17:33:32.359935254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:32.361729 containerd[1498]: time="2025-09-12T17:33:32.360826498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 462.247165ms" Sep 12 17:33:32.361729 containerd[1498]: time="2025-09-12T17:33:32.360856071Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:33:32.361928 containerd[1498]: time="2025-09-12T17:33:32.361888642Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:33:32.851327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413032506.mount: Deactivated successfully. Sep 12 17:33:34.097192 containerd[1498]: time="2025-09-12T17:33:34.097126690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:34.098091 containerd[1498]: time="2025-09-12T17:33:34.098055658Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910785" Sep 12 17:33:34.099166 containerd[1498]: time="2025-09-12T17:33:34.098750305Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:34.100910 containerd[1498]: time="2025-09-12T17:33:34.100880283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:34.102175 containerd[1498]: time="2025-09-12T17:33:34.101808059Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.739893482s" Sep 12 17:33:34.102175 containerd[1498]: time="2025-09-12T17:33:34.101837733Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:33:34.384627 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
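
The "Scheduled restart job, restart counter is at 3" record directly above is systemd's Restart= policy at work: kubelet.service has already exited a few times and systemd keeps queueing fresh attempts. The same restart can be requested programmatically over D-Bus; a sketch using the coreos go-systemd bindings, assuming the system bus is reachable and the caller is privileged:

    package main

    import (
        "context"
        "log"

        "github.com/coreos/go-systemd/v22/dbus"
    )

    func main() {
        ctx := context.Background()
        conn, err := dbus.NewWithContext(ctx) // system bus connection
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        done := make(chan string, 1)
        if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
            log.Fatal(err)
        }
        log.Printf("restart job finished: %s", <-done) // e.g. "done" or "failed"
    }
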
Sep 12 17:33:34.391939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:34.503414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:34.507647 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:33:34.560753 kubelet[2066]: E0912 17:33:34.560689 2066 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:33:34.562895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:33:34.563061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:33:37.335086 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:37.340385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:37.368942 systemd[1]: Reloading requested from client PID 2094 ('systemctl') (unit session-7.scope)... Sep 12 17:33:37.368964 systemd[1]: Reloading... Sep 12 17:33:37.468189 zram_generator::config[2140]: No configuration found. Sep 12 17:33:37.563408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:33:37.626821 systemd[1]: Reloading finished in 257 ms. Sep 12 17:33:37.672908 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:37.677122 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:33:37.677381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:37.681396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:37.768019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:37.773451 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:33:37.819015 kubelet[2190]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:33:37.820553 kubelet[2190]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:33:37.820553 kubelet[2190]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
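
The config.yaml failure a few records up is the expected first-boot ordering: kubelet.service starts before kubeadm has written /var/lib/kubelet/config.yaml, exits with status 1, and systemd's restart loop retries until the file exists. A stdlib sketch of that wait, with the path taken from the error message and an arbitrary polling interval of our own:

    package main

    import (
        "log"
        "os"
        "time"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml" // path from the kubelet error above

        // Poll until kubeadm has written the config, which is effectively what
        // systemd's restart policy does for kubelet.service itself.
        for {
            if _, err := os.Stat(path); err == nil {
                log.Printf("%s exists, kubelet can load its config", path)
                return
            } else if !os.IsNotExist(err) {
                log.Fatal(err)
            }
            time.Sleep(2 * time.Second) // interval is an arbitrary choice for this sketch
        }
    }

The deprecated-flag warnings that follow the successful start are informational only: those settings keep working but belong in the config file the kubelet just loaded.
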
Sep 12 17:33:37.820553 kubelet[2190]: I0912 17:33:37.819567 2190 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:33:37.981653 kubelet[2190]: I0912 17:33:37.981620 2190 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:33:37.981799 kubelet[2190]: I0912 17:33:37.981790 2190 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:33:37.982069 kubelet[2190]: I0912 17:33:37.982057 2190 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:33:38.007341 kubelet[2190]: I0912 17:33:38.007305 2190 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:33:38.012139 kubelet[2190]: E0912 17:33:38.011804 2190 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://95.216.139.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 95.216.139.29:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:33:38.018807 kubelet[2190]: E0912 17:33:38.018777 2190 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:33:38.018940 kubelet[2190]: I0912 17:33:38.018930 2190 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:33:38.023641 kubelet[2190]: I0912 17:33:38.023594 2190 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:33:38.025503 kubelet[2190]: I0912 17:33:38.025472 2190 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:33:38.025690 kubelet[2190]: I0912 17:33:38.025645 2190 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:33:38.025870 kubelet[2190]: I0912 17:33:38.025681 2190 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-c-e429241c3f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:33:38.025985 kubelet[2190]: I0912 17:33:38.025876 2190 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:33:38.025985 kubelet[2190]: I0912 17:33:38.025888 2190 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:33:38.026048 kubelet[2190]: I0912 17:33:38.026007 2190 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:33:38.029032 kubelet[2190]: I0912 17:33:38.028551 2190 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:33:38.029032 kubelet[2190]: I0912 17:33:38.028581 2190 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:33:38.029032 kubelet[2190]: I0912 17:33:38.028619 2190 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:33:38.029032 kubelet[2190]: I0912 17:33:38.028637 2190 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:33:38.035335 kubelet[2190]: W0912 17:33:38.034742 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://95.216.139.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-c-e429241c3f&limit=500&resourceVersion=0": dial tcp 95.216.139.29:6443: connect: connection refused Sep 12 17:33:38.035335 kubelet[2190]: E0912 17:33:38.034827 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://95.216.139.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-c-e429241c3f&limit=500&resourceVersion=0\": dial tcp 95.216.139.29:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:33:38.036592 kubelet[2190]: I0912 17:33:38.036357 2190 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:33:38.037826 kubelet[2190]: W0912 17:33:38.037785 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://95.216.139.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 95.216.139.29:6443: connect: connection refused Sep 12 17:33:38.037921 kubelet[2190]: E0912 17:33:38.037904 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://95.216.139.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 95.216.139.29:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:33:38.040929 kubelet[2190]: I0912 17:33:38.040902 2190 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:33:38.043174 kubelet[2190]: W0912 17:33:38.042166 2190 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:33:38.043174 kubelet[2190]: I0912 17:33:38.043039 2190 server.go:1274] "Started kubelet" Sep 12 17:33:38.045063 kubelet[2190]: I0912 17:33:38.044997 2190 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:33:38.046510 kubelet[2190]: I0912 17:33:38.046478 2190 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:33:38.050376 kubelet[2190]: I0912 17:33:38.049648 2190 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:33:38.050376 kubelet[2190]: I0912 17:33:38.049933 2190 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:33:38.053191 kubelet[2190]: I0912 17:33:38.051309 2190 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:33:38.053191 kubelet[2190]: E0912 17:33:38.050309 2190 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://95.216.139.29:6443/api/v1/namespaces/default/events\": dial tcp 95.216.139.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-c-e429241c3f.1864996605b84b1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-c-e429241c3f,UID:ci-4081-3-6-c-e429241c3f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-c-e429241c3f,},FirstTimestamp:2025-09-12 17:33:38.043013919 +0000 UTC m=+0.265897429,LastTimestamp:2025-09-12 17:33:38.043013919 +0000 UTC m=+0.265897429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-c-e429241c3f,}" Sep 12 17:33:38.053191 kubelet[2190]: I0912 17:33:38.051724 2190 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:33:38.053728 kubelet[2190]: I0912 17:33:38.053713 2190 
volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:33:38.055116 kubelet[2190]: I0912 17:33:38.055032 2190 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:33:38.060907 kubelet[2190]: I0912 17:33:38.060882 2190 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:33:38.061518 kubelet[2190]: W0912 17:33:38.061475 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://95.216.139.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 95.216.139.29:6443: connect: connection refused Sep 12 17:33:38.061659 kubelet[2190]: E0912 17:33:38.061636 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://95.216.139.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 95.216.139.29:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:33:38.062742 kubelet[2190]: I0912 17:33:38.062728 2190 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:33:38.062905 kubelet[2190]: I0912 17:33:38.062887 2190 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:33:38.064768 kubelet[2190]: E0912 17:33:38.064749 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:38.065399 kubelet[2190]: E0912 17:33:38.065373 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.216.139.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-c-e429241c3f?timeout=10s\": dial tcp 95.216.139.29:6443: connect: connection refused" interval="200ms" Sep 12 17:33:38.067382 kubelet[2190]: I0912 17:33:38.067355 2190 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:33:38.078729 kubelet[2190]: I0912 17:33:38.078670 2190 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:33:38.079880 kubelet[2190]: I0912 17:33:38.079848 2190 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:33:38.079880 kubelet[2190]: I0912 17:33:38.079883 2190 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:33:38.080018 kubelet[2190]: I0912 17:33:38.079911 2190 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:33:38.080018 kubelet[2190]: E0912 17:33:38.079959 2190 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:33:38.086118 kubelet[2190]: W0912 17:33:38.086076 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://95.216.139.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 95.216.139.29:6443: connect: connection refused Sep 12 17:33:38.086292 kubelet[2190]: E0912 17:33:38.086269 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://95.216.139.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 95.216.139.29:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:33:38.095647 kubelet[2190]: I0912 17:33:38.095612 2190 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:33:38.095771 kubelet[2190]: I0912 17:33:38.095694 2190 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:33:38.095771 kubelet[2190]: I0912 17:33:38.095713 2190 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:33:38.098223 kubelet[2190]: I0912 17:33:38.098194 2190 policy_none.go:49] "None policy: Start" Sep 12 17:33:38.098788 kubelet[2190]: I0912 17:33:38.098776 2190 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:33:38.099108 kubelet[2190]: I0912 17:33:38.098893 2190 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:33:38.108230 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:33:38.122622 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:33:38.136883 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:33:38.138220 kubelet[2190]: I0912 17:33:38.138088 2190 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:33:38.138648 kubelet[2190]: I0912 17:33:38.138614 2190 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:33:38.138746 kubelet[2190]: I0912 17:33:38.138636 2190 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:33:38.143919 kubelet[2190]: I0912 17:33:38.143753 2190 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:33:38.145163 kubelet[2190]: E0912 17:33:38.145061 2190 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:38.192982 systemd[1]: Created slice kubepods-burstable-pod198327b94ac0feebaa62e74169e796db.slice - libcontainer container kubepods-burstable-pod198327b94ac0feebaa62e74169e796db.slice. Sep 12 17:33:38.217467 systemd[1]: Created slice kubepods-burstable-podf14dbdc012a8b3076cb7dbe87c402946.slice - libcontainer container kubepods-burstable-podf14dbdc012a8b3076cb7dbe87c402946.slice. 
Sep 12 17:33:38.233061 systemd[1]: Created slice kubepods-burstable-pod672b4b046bfa7e70ec13fdbc94ea3635.slice - libcontainer container kubepods-burstable-pod672b4b046bfa7e70ec13fdbc94ea3635.slice. Sep 12 17:33:38.241444 kubelet[2190]: I0912 17:33:38.241377 2190 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.241934 kubelet[2190]: E0912 17:33:38.241890 2190 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://95.216.139.29:6443/api/v1/nodes\": dial tcp 95.216.139.29:6443: connect: connection refused" node="ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.262438 kubelet[2190]: I0912 17:33:38.262371 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/198327b94ac0feebaa62e74169e796db-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-c-e429241c3f\" (UID: \"198327b94ac0feebaa62e74169e796db\") " pod="kube-system/kube-apiserver-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.262438 kubelet[2190]: I0912 17:33:38.262427 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/198327b94ac0feebaa62e74169e796db-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-c-e429241c3f\" (UID: \"198327b94ac0feebaa62e74169e796db\") " pod="kube-system/kube-apiserver-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.262683 kubelet[2190]: I0912 17:33:38.262460 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f14dbdc012a8b3076cb7dbe87c402946-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" (UID: \"f14dbdc012a8b3076cb7dbe87c402946\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.262683 kubelet[2190]: I0912 17:33:38.262491 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f14dbdc012a8b3076cb7dbe87c402946-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" (UID: \"f14dbdc012a8b3076cb7dbe87c402946\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.262683 kubelet[2190]: I0912 17:33:38.262517 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f14dbdc012a8b3076cb7dbe87c402946-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" (UID: \"f14dbdc012a8b3076cb7dbe87c402946\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.262683 kubelet[2190]: I0912 17:33:38.262544 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/198327b94ac0feebaa62e74169e796db-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-c-e429241c3f\" (UID: \"198327b94ac0feebaa62e74169e796db\") " pod="kube-system/kube-apiserver-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.262683 kubelet[2190]: I0912 17:33:38.262568 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f14dbdc012a8b3076cb7dbe87c402946-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" (UID: 
\"f14dbdc012a8b3076cb7dbe87c402946\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.262840 kubelet[2190]: I0912 17:33:38.262605 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f14dbdc012a8b3076cb7dbe87c402946-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" (UID: \"f14dbdc012a8b3076cb7dbe87c402946\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.262840 kubelet[2190]: I0912 17:33:38.262628 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/672b4b046bfa7e70ec13fdbc94ea3635-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-c-e429241c3f\" (UID: \"672b4b046bfa7e70ec13fdbc94ea3635\") " pod="kube-system/kube-scheduler-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.266901 kubelet[2190]: E0912 17:33:38.266851 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.216.139.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-c-e429241c3f?timeout=10s\": dial tcp 95.216.139.29:6443: connect: connection refused" interval="400ms" Sep 12 17:33:38.444794 kubelet[2190]: I0912 17:33:38.444744 2190 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.445166 kubelet[2190]: E0912 17:33:38.445103 2190 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://95.216.139.29:6443/api/v1/nodes\": dial tcp 95.216.139.29:6443: connect: connection refused" node="ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.515013 containerd[1498]: time="2025-09-12T17:33:38.514895546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-c-e429241c3f,Uid:198327b94ac0feebaa62e74169e796db,Namespace:kube-system,Attempt:0,}" Sep 12 17:33:38.526364 containerd[1498]: time="2025-09-12T17:33:38.526307536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-c-e429241c3f,Uid:f14dbdc012a8b3076cb7dbe87c402946,Namespace:kube-system,Attempt:0,}" Sep 12 17:33:38.536466 containerd[1498]: time="2025-09-12T17:33:38.536420669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-c-e429241c3f,Uid:672b4b046bfa7e70ec13fdbc94ea3635,Namespace:kube-system,Attempt:0,}" Sep 12 17:33:38.667800 kubelet[2190]: E0912 17:33:38.667733 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.216.139.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-c-e429241c3f?timeout=10s\": dial tcp 95.216.139.29:6443: connect: connection refused" interval="800ms" Sep 12 17:33:38.846465 kubelet[2190]: W0912 17:33:38.846231 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://95.216.139.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-c-e429241c3f&limit=500&resourceVersion=0": dial tcp 95.216.139.29:6443: connect: connection refused Sep 12 17:33:38.846465 kubelet[2190]: E0912 17:33:38.846328 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://95.216.139.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-c-e429241c3f&limit=500&resourceVersion=0\": dial tcp 95.216.139.29:6443: connect: connection 
refused" logger="UnhandledError" Sep 12 17:33:38.848368 kubelet[2190]: I0912 17:33:38.848323 2190 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.848809 kubelet[2190]: E0912 17:33:38.848756 2190 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://95.216.139.29:6443/api/v1/nodes\": dial tcp 95.216.139.29:6443: connect: connection refused" node="ci-4081-3-6-c-e429241c3f" Sep 12 17:33:38.984906 kubelet[2190]: W0912 17:33:38.984789 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://95.216.139.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 95.216.139.29:6443: connect: connection refused Sep 12 17:33:38.985743 kubelet[2190]: E0912 17:33:38.984858 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://95.216.139.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 95.216.139.29:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:33:39.003703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185326665.mount: Deactivated successfully. Sep 12 17:33:39.009505 containerd[1498]: time="2025-09-12T17:33:39.009459193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:33:39.010329 containerd[1498]: time="2025-09-12T17:33:39.010302516Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:33:39.011172 containerd[1498]: time="2025-09-12T17:33:39.011121850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:33:39.011710 containerd[1498]: time="2025-09-12T17:33:39.011660662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Sep 12 17:33:39.012249 containerd[1498]: time="2025-09-12T17:33:39.012178450Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:33:39.014536 containerd[1498]: time="2025-09-12T17:33:39.014494443Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:33:39.016952 containerd[1498]: time="2025-09-12T17:33:39.016783772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.286004ms" Sep 12 17:33:39.019739 containerd[1498]: time="2025-09-12T17:33:39.019047337Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:33:39.021385 containerd[1498]: time="2025-09-12T17:33:39.020880564Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 494.481231ms" Sep 12 17:33:39.021753 containerd[1498]: time="2025-09-12T17:33:39.021682533Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 506.708135ms" Sep 12 17:33:39.022140 containerd[1498]: time="2025-09-12T17:33:39.022106981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:33:39.080421 kubelet[2190]: W0912 17:33:39.077593 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://95.216.139.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 95.216.139.29:6443: connect: connection refused Sep 12 17:33:39.080421 kubelet[2190]: E0912 17:33:39.077668 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://95.216.139.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 95.216.139.29:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:33:39.148086 containerd[1498]: time="2025-09-12T17:33:39.148031921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:39.148752 containerd[1498]: time="2025-09-12T17:33:39.148582116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:39.148752 containerd[1498]: time="2025-09-12T17:33:39.148597699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:39.148752 containerd[1498]: time="2025-09-12T17:33:39.148680357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:39.153948 containerd[1498]: time="2025-09-12T17:33:39.153770930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:39.153948 containerd[1498]: time="2025-09-12T17:33:39.153818297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:39.153948 containerd[1498]: time="2025-09-12T17:33:39.153831534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:39.153948 containerd[1498]: time="2025-09-12T17:33:39.153899534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:39.157465 containerd[1498]: time="2025-09-12T17:33:39.157175527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:39.158251 containerd[1498]: time="2025-09-12T17:33:39.157213315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:39.158251 containerd[1498]: time="2025-09-12T17:33:39.158215744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:39.160306 containerd[1498]: time="2025-09-12T17:33:39.160002495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:39.174374 systemd[1]: Started cri-containerd-32dfd036b01e011d3ae1489fc70159d050f99475b9216b9765bc2cbc4faf415d.scope - libcontainer container 32dfd036b01e011d3ae1489fc70159d050f99475b9216b9765bc2cbc4faf415d. Sep 12 17:33:39.189435 systemd[1]: Started cri-containerd-334046b0342375ae452ff3deb6cd4f25e1705d319bb2022b0053bf3e44075e47.scope - libcontainer container 334046b0342375ae452ff3deb6cd4f25e1705d319bb2022b0053bf3e44075e47. Sep 12 17:33:39.190550 systemd[1]: Started cri-containerd-591fe1d47f6ccead07d413a64d400069862e56f6a3892be978ef150afe1367e0.scope - libcontainer container 591fe1d47f6ccead07d413a64d400069862e56f6a3892be978ef150afe1367e0. Sep 12 17:33:39.226478 containerd[1498]: time="2025-09-12T17:33:39.226441140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-c-e429241c3f,Uid:672b4b046bfa7e70ec13fdbc94ea3635,Namespace:kube-system,Attempt:0,} returns sandbox id \"32dfd036b01e011d3ae1489fc70159d050f99475b9216b9765bc2cbc4faf415d\"" Sep 12 17:33:39.230137 kubelet[2190]: W0912 17:33:39.229816 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://95.216.139.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 95.216.139.29:6443: connect: connection refused Sep 12 17:33:39.233603 kubelet[2190]: E0912 17:33:39.231233 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://95.216.139.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 95.216.139.29:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:33:39.233685 containerd[1498]: time="2025-09-12T17:33:39.233653188Z" level=info msg="CreateContainer within sandbox \"32dfd036b01e011d3ae1489fc70159d050f99475b9216b9765bc2cbc4faf415d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:33:39.252344 containerd[1498]: time="2025-09-12T17:33:39.252298873Z" level=info msg="CreateContainer within sandbox \"32dfd036b01e011d3ae1489fc70159d050f99475b9216b9765bc2cbc4faf415d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"279decf1d2abcc5e707c69fa63317c022fd479844f699ebf0dfcded5dd10733e\"" Sep 12 17:33:39.255869 containerd[1498]: time="2025-09-12T17:33:39.255849229Z" level=info msg="StartContainer for \"279decf1d2abcc5e707c69fa63317c022fd479844f699ebf0dfcded5dd10733e\"" Sep 12 17:33:39.258852 containerd[1498]: time="2025-09-12T17:33:39.258833948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-c-e429241c3f,Uid:198327b94ac0feebaa62e74169e796db,Namespace:kube-system,Attempt:0,} returns sandbox id \"334046b0342375ae452ff3deb6cd4f25e1705d319bb2022b0053bf3e44075e47\"" Sep 12 
17:33:39.263876 containerd[1498]: time="2025-09-12T17:33:39.263857294Z" level=info msg="CreateContainer within sandbox \"334046b0342375ae452ff3deb6cd4f25e1705d319bb2022b0053bf3e44075e47\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:33:39.276688 containerd[1498]: time="2025-09-12T17:33:39.276584954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-c-e429241c3f,Uid:f14dbdc012a8b3076cb7dbe87c402946,Namespace:kube-system,Attempt:0,} returns sandbox id \"591fe1d47f6ccead07d413a64d400069862e56f6a3892be978ef150afe1367e0\"" Sep 12 17:33:39.281080 containerd[1498]: time="2025-09-12T17:33:39.281049638Z" level=info msg="CreateContainer within sandbox \"591fe1d47f6ccead07d413a64d400069862e56f6a3892be978ef150afe1367e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:33:39.281968 containerd[1498]: time="2025-09-12T17:33:39.281935819Z" level=info msg="CreateContainer within sandbox \"334046b0342375ae452ff3deb6cd4f25e1705d319bb2022b0053bf3e44075e47\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f0d401cd479cbaac329d2c9b288363d0e86366cffa4058be662d2c061a1c072\"" Sep 12 17:33:39.282350 containerd[1498]: time="2025-09-12T17:33:39.282293600Z" level=info msg="StartContainer for \"4f0d401cd479cbaac329d2c9b288363d0e86366cffa4058be662d2c061a1c072\"" Sep 12 17:33:39.283130 systemd[1]: Started cri-containerd-279decf1d2abcc5e707c69fa63317c022fd479844f699ebf0dfcded5dd10733e.scope - libcontainer container 279decf1d2abcc5e707c69fa63317c022fd479844f699ebf0dfcded5dd10733e. Sep 12 17:33:39.308083 containerd[1498]: time="2025-09-12T17:33:39.308050367Z" level=info msg="CreateContainer within sandbox \"591fe1d47f6ccead07d413a64d400069862e56f6a3892be978ef150afe1367e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c\"" Sep 12 17:33:39.308983 containerd[1498]: time="2025-09-12T17:33:39.308841063Z" level=info msg="StartContainer for \"807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c\"" Sep 12 17:33:39.310280 systemd[1]: Started cri-containerd-4f0d401cd479cbaac329d2c9b288363d0e86366cffa4058be662d2c061a1c072.scope - libcontainer container 4f0d401cd479cbaac329d2c9b288363d0e86366cffa4058be662d2c061a1c072. Sep 12 17:33:39.338350 containerd[1498]: time="2025-09-12T17:33:39.338081189Z" level=info msg="StartContainer for \"279decf1d2abcc5e707c69fa63317c022fd479844f699ebf0dfcded5dd10733e\" returns successfully" Sep 12 17:33:39.341506 systemd[1]: Started cri-containerd-807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c.scope - libcontainer container 807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c. 
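
The RunPodSandbox / CreateContainer / StartContainer sequence above is the kubelet walking the CRI once per static pod manifest in /etc/kubernetes/manifests. The same endpoints can be queried directly over the containerd socket; a sketch using the published CRI client stubs (k8s.io/cri-api), assuming gRPC's unix-scheme dialing works against this socket:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Same socket the kubelet's --container-runtime-endpoint points at here.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // List the sandboxes created by the RunPodSandbox calls above.
        resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, sb := range resp.Items {
            log.Printf("%s/%s state=%v", sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
        }
    }
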
Sep 12 17:33:39.361745 containerd[1498]: time="2025-09-12T17:33:39.361374551Z" level=info msg="StartContainer for \"4f0d401cd479cbaac329d2c9b288363d0e86366cffa4058be662d2c061a1c072\" returns successfully" Sep 12 17:33:39.384191 containerd[1498]: time="2025-09-12T17:33:39.384138232Z" level=info msg="StartContainer for \"807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c\" returns successfully" Sep 12 17:33:39.468281 kubelet[2190]: E0912 17:33:39.468137 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.216.139.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-c-e429241c3f?timeout=10s\": dial tcp 95.216.139.29:6443: connect: connection refused" interval="1.6s" Sep 12 17:33:39.651356 kubelet[2190]: I0912 17:33:39.650721 2190 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-c-e429241c3f" Sep 12 17:33:40.814660 kubelet[2190]: I0912 17:33:40.814623 2190 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-6-c-e429241c3f" Sep 12 17:33:40.814660 kubelet[2190]: E0912 17:33:40.814660 2190 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-c-e429241c3f\": node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:40.838570 kubelet[2190]: E0912 17:33:40.838540 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:40.939264 kubelet[2190]: E0912 17:33:40.939195 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:41.039967 kubelet[2190]: E0912 17:33:41.039891 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:41.141100 kubelet[2190]: E0912 17:33:41.141034 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:41.242286 kubelet[2190]: E0912 17:33:41.242230 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:41.343321 kubelet[2190]: E0912 17:33:41.343230 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:41.444259 kubelet[2190]: E0912 17:33:41.444051 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:41.544814 kubelet[2190]: E0912 17:33:41.544737 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:41.645857 kubelet[2190]: E0912 17:33:41.645785 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:41.747031 kubelet[2190]: E0912 17:33:41.746753 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:41.847888 kubelet[2190]: E0912 17:33:41.847812 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:41.948923 kubelet[2190]: E0912 17:33:41.948675 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 
12 17:33:42.037871 kubelet[2190]: I0912 17:33:42.037395 2190 apiserver.go:52] "Watching apiserver" Sep 12 17:33:42.055708 kubelet[2190]: I0912 17:33:42.055634 2190 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:33:42.722382 systemd[1]: Reloading requested from client PID 2458 ('systemctl') (unit session-7.scope)... Sep 12 17:33:42.722399 systemd[1]: Reloading... Sep 12 17:33:42.816180 zram_generator::config[2501]: No configuration found. Sep 12 17:33:42.885257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:33:42.952670 systemd[1]: Reloading finished in 230 ms. Sep 12 17:33:42.995331 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:43.008181 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:33:43.008473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:43.017419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:43.139587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:43.148412 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:33:43.219543 kubelet[2549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:33:43.219543 kubelet[2549]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:33:43.219543 kubelet[2549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:33:43.219873 kubelet[2549]: I0912 17:33:43.219594 2549 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:33:43.226163 kubelet[2549]: I0912 17:33:43.225363 2549 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:33:43.226163 kubelet[2549]: I0912 17:33:43.225381 2549 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:33:43.226163 kubelet[2549]: I0912 17:33:43.225534 2549 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:33:43.226618 kubelet[2549]: I0912 17:33:43.226605 2549 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
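
On this second kubelet start the client-certificate bootstrap has already completed, so instead of posting a CSR (the earlier connection-refused attempt against :6443) the kubelet loads the rotated pair from kubelet-client-current.pem. A stdlib sketch for checking that certificate's validity window, with the path taken from the certificate_store.go record above; the file holds the certificate PEM block alongside the private key:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            log.Fatal(err)
        }

        // Walk the PEM blocks and inspect only the certificate.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("subject=%s\nnot before=%s\nnot after=%s\n",
                cert.Subject, cert.NotBefore, cert.NotAfter)
        }
    }

With "Client rotation is on" the kubelet renews this pair in the background well before NotAfter, which is why the symlinked -current.pem name is what gets loaded rather than a dated file.
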
Sep 12 17:33:43.227999 kubelet[2549]: I0912 17:33:43.227982 2549 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:33:43.235285 kubelet[2549]: E0912 17:33:43.235229 2549 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:33:43.235285 kubelet[2549]: I0912 17:33:43.235258 2549 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:33:43.237551 kubelet[2549]: I0912 17:33:43.237540 2549 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:33:43.237657 kubelet[2549]: I0912 17:33:43.237636 2549 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:33:43.237775 kubelet[2549]: I0912 17:33:43.237748 2549 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:33:43.237919 kubelet[2549]: I0912 17:33:43.237771 2549 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-c-e429241c3f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:33:43.240639 kubelet[2549]: I0912 17:33:43.240614 2549 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:33:43.240639 kubelet[2549]: I0912 17:33:43.240634 2549 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:33:43.240695 kubelet[2549]: I0912 17:33:43.240663 2549 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:33:43.241443 kubelet[2549]: I0912 17:33:43.240772 2549 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:33:43.241443 kubelet[2549]: I0912 17:33:43.240792 2549 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:33:43.241443 kubelet[2549]: I0912 
17:33:43.240825 2549 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:33:43.241443 kubelet[2549]: I0912 17:33:43.240838 2549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:33:43.243747 kubelet[2549]: I0912 17:33:43.242287 2549 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:33:43.244078 kubelet[2549]: I0912 17:33:43.244064 2549 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:33:43.244494 kubelet[2549]: I0912 17:33:43.244462 2549 server.go:1274] "Started kubelet" Sep 12 17:33:43.247944 kubelet[2549]: I0912 17:33:43.247804 2549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:33:43.258656 kubelet[2549]: I0912 17:33:43.258622 2549 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:33:43.259784 kubelet[2549]: E0912 17:33:43.258829 2549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-c-e429241c3f\" not found" Sep 12 17:33:43.259784 kubelet[2549]: I0912 17:33:43.259662 2549 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:33:43.260805 kubelet[2549]: I0912 17:33:43.260785 2549 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:33:43.260920 kubelet[2549]: I0912 17:33:43.260904 2549 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:33:43.261350 kubelet[2549]: I0912 17:33:43.261335 2549 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:33:43.262946 kubelet[2549]: I0912 17:33:43.262857 2549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:33:43.263365 kubelet[2549]: I0912 17:33:43.263352 2549 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:33:43.263496 kubelet[2549]: I0912 17:33:43.263468 2549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:33:43.263644 kubelet[2549]: I0912 17:33:43.263632 2549 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:33:43.265031 kubelet[2549]: I0912 17:33:43.265009 2549 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:33:43.265272 kubelet[2549]: I0912 17:33:43.265256 2549 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:33:43.265313 kubelet[2549]: I0912 17:33:43.265279 2549 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:33:43.265340 kubelet[2549]: E0912 17:33:43.265316 2549 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:33:43.265695 kubelet[2549]: I0912 17:33:43.265681 2549 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:33:43.266292 kubelet[2549]: I0912 17:33:43.266270 2549 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:33:43.274639 kubelet[2549]: I0912 17:33:43.274623 2549 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:33:43.279601 kubelet[2549]: E0912 17:33:43.279578 2549 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:33:43.302434 kubelet[2549]: I0912 17:33:43.302384 2549 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:33:43.302579 kubelet[2549]: I0912 17:33:43.302568 2549 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:33:43.302626 kubelet[2549]: I0912 17:33:43.302620 2549 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:33:43.302774 kubelet[2549]: I0912 17:33:43.302762 2549 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:33:43.302837 kubelet[2549]: I0912 17:33:43.302816 2549 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:33:43.302885 kubelet[2549]: I0912 17:33:43.302879 2549 policy_none.go:49] "None policy: Start" Sep 12 17:33:43.303865 kubelet[2549]: I0912 17:33:43.303327 2549 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:33:43.303865 kubelet[2549]: I0912 17:33:43.303343 2549 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:33:43.303865 kubelet[2549]: I0912 17:33:43.303463 2549 state_mem.go:75] "Updated machine memory state" Sep 12 17:33:43.306636 kubelet[2549]: I0912 17:33:43.306624 2549 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:33:43.307047 kubelet[2549]: I0912 17:33:43.307036 2549 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:33:43.307758 kubelet[2549]: I0912 17:33:43.307731 2549 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:33:43.308813 kubelet[2549]: I0912 17:33:43.308803 2549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:33:43.372854 kubelet[2549]: E0912 17:33:43.372811 2549 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:43.411572 kubelet[2549]: I0912 17:33:43.411544 2549 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-c-e429241c3f" Sep 12 17:33:43.420066 kubelet[2549]: I0912 17:33:43.420038 2549 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-6-c-e429241c3f" Sep 12 17:33:43.420216 kubelet[2549]: I0912 
Sep 12 17:33:43.562331 kubelet[2549]: I0912 17:33:43.562111 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f14dbdc012a8b3076cb7dbe87c402946-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" (UID: \"f14dbdc012a8b3076cb7dbe87c402946\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f"
Sep 12 17:33:43.562331 kubelet[2549]: I0912 17:33:43.562173 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f14dbdc012a8b3076cb7dbe87c402946-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" (UID: \"f14dbdc012a8b3076cb7dbe87c402946\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f"
Sep 12 17:33:43.562331 kubelet[2549]: I0912 17:33:43.562205 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/672b4b046bfa7e70ec13fdbc94ea3635-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-c-e429241c3f\" (UID: \"672b4b046bfa7e70ec13fdbc94ea3635\") " pod="kube-system/kube-scheduler-ci-4081-3-6-c-e429241c3f"
Sep 12 17:33:43.562331 kubelet[2549]: I0912 17:33:43.562226 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/198327b94ac0feebaa62e74169e796db-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-c-e429241c3f\" (UID: \"198327b94ac0feebaa62e74169e796db\") " pod="kube-system/kube-apiserver-ci-4081-3-6-c-e429241c3f"
Sep 12 17:33:43.562331 kubelet[2549]: I0912 17:33:43.562246 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/198327b94ac0feebaa62e74169e796db-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-c-e429241c3f\" (UID: \"198327b94ac0feebaa62e74169e796db\") " pod="kube-system/kube-apiserver-ci-4081-3-6-c-e429241c3f"
Sep 12 17:33:43.563310 kubelet[2549]: I0912 17:33:43.562266 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f14dbdc012a8b3076cb7dbe87c402946-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" (UID: \"f14dbdc012a8b3076cb7dbe87c402946\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f"
Sep 12 17:33:43.563396 kubelet[2549]: I0912 17:33:43.563339 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f14dbdc012a8b3076cb7dbe87c402946-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" (UID: \"f14dbdc012a8b3076cb7dbe87c402946\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f"
Sep 12 17:33:43.563396 kubelet[2549]: I0912 17:33:43.563364 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f14dbdc012a8b3076cb7dbe87c402946-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-c-e429241c3f\" (UID: \"f14dbdc012a8b3076cb7dbe87c402946\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f"
pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:43.563396 kubelet[2549]: I0912 17:33:43.563381 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/198327b94ac0feebaa62e74169e796db-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-c-e429241c3f\" (UID: \"198327b94ac0feebaa62e74169e796db\") " pod="kube-system/kube-apiserver-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:44.242631 kubelet[2549]: I0912 17:33:44.242226 2549 apiserver.go:52] "Watching apiserver" Sep 12 17:33:44.262672 kubelet[2549]: I0912 17:33:44.261686 2549 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:33:44.309507 kubelet[2549]: E0912 17:33:44.309419 2549 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-6-c-e429241c3f\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-c-e429241c3f" Sep 12 17:33:44.374322 kubelet[2549]: I0912 17:33:44.373719 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-c-e429241c3f" podStartSLOduration=1.3735329090000001 podStartE2EDuration="1.373532909s" podCreationTimestamp="2025-09-12 17:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:44.345883668 +0000 UTC m=+1.176722599" watchObservedRunningTime="2025-09-12 17:33:44.373532909 +0000 UTC m=+1.204371810" Sep 12 17:33:44.374721 kubelet[2549]: I0912 17:33:44.374614 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-c-e429241c3f" podStartSLOduration=2.374545329 podStartE2EDuration="2.374545329s" podCreationTimestamp="2025-09-12 17:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:44.370876605 +0000 UTC m=+1.201715506" watchObservedRunningTime="2025-09-12 17:33:44.374545329 +0000 UTC m=+1.205435001" Sep 12 17:33:44.419317 kubelet[2549]: I0912 17:33:44.419200 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-c-e429241c3f" podStartSLOduration=1.419178504 podStartE2EDuration="1.419178504s" podCreationTimestamp="2025-09-12 17:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:44.394340557 +0000 UTC m=+1.225179458" watchObservedRunningTime="2025-09-12 17:33:44.419178504 +0000 UTC m=+1.250017395" Sep 12 17:33:46.421125 update_engine[1478]: I20250912 17:33:46.420997 1478 update_attempter.cc:509] Updating boot flags... 
Sep 12 17:33:46.488233 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2604)
Sep 12 17:33:46.593208 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2603)
Sep 12 17:33:46.631239 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2603)
Sep 12 17:33:49.054504 kubelet[2549]: I0912 17:33:49.054456 2549 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 17:33:49.054909 containerd[1498]: time="2025-09-12T17:33:49.054818061Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 17:33:49.055224 kubelet[2549]: I0912 17:33:49.055073 2549 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 17:33:49.729245 systemd[1]: Created slice kubepods-besteffort-podf89109d0_60eb_4404_9a68_69d42dee1a67.slice - libcontainer container kubepods-besteffort-podf89109d0_60eb_4404_9a68_69d42dee1a67.slice.
Sep 12 17:33:49.807495 kubelet[2549]: I0912 17:33:49.807372 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f89109d0-60eb-4404-9a68-69d42dee1a67-xtables-lock\") pod \"kube-proxy-vtlr5\" (UID: \"f89109d0-60eb-4404-9a68-69d42dee1a67\") " pod="kube-system/kube-proxy-vtlr5"
Sep 12 17:33:49.807808 kubelet[2549]: I0912 17:33:49.807453 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f89109d0-60eb-4404-9a68-69d42dee1a67-kube-proxy\") pod \"kube-proxy-vtlr5\" (UID: \"f89109d0-60eb-4404-9a68-69d42dee1a67\") " pod="kube-system/kube-proxy-vtlr5"
Sep 12 17:33:49.807808 kubelet[2549]: I0912 17:33:49.807685 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f89109d0-60eb-4404-9a68-69d42dee1a67-lib-modules\") pod \"kube-proxy-vtlr5\" (UID: \"f89109d0-60eb-4404-9a68-69d42dee1a67\") " pod="kube-system/kube-proxy-vtlr5"
Sep 12 17:33:49.809803 kubelet[2549]: I0912 17:33:49.809717 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbsw\" (UniqueName: \"kubernetes.io/projected/f89109d0-60eb-4404-9a68-69d42dee1a67-kube-api-access-tvbsw\") pod \"kube-proxy-vtlr5\" (UID: \"f89109d0-60eb-4404-9a68-69d42dee1a67\") " pod="kube-system/kube-proxy-vtlr5"
Sep 12 17:33:50.037014 containerd[1498]: time="2025-09-12T17:33:50.036650754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vtlr5,Uid:f89109d0-60eb-4404-9a68-69d42dee1a67,Namespace:kube-system,Attempt:0,}"
Sep 12 17:33:50.057768 containerd[1498]: time="2025-09-12T17:33:50.057674300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:33:50.058556 containerd[1498]: time="2025-09-12T17:33:50.057748126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:33:50.058556 containerd[1498]: time="2025-09-12T17:33:50.057773476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
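At this point the node has been handed its pod CIDR (192.168.0.0/24) and pushes it to containerd through the CRI. Inspecting what that range covers is a single net.ParseCIDR call; the values are from the log, the membership check is just an illustration:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // newPodCIDR from the kubelet_network.go:61 entry above.
        _, podNet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        // A /24 leaves room for 254 usable pod IPs on this node.
        fmt.Println(podNet.Contains(net.ParseIP("192.168.0.17"))) // true
        fmt.Println(podNet.Contains(net.ParseIP("192.168.1.17"))) // false
    }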
Sep 12 17:33:50.058556 containerd[1498]: time="2025-09-12T17:33:50.057846239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:33:50.083731 systemd[1]: Started cri-containerd-c342b716152acfb7b0fd1769d9131c0cd622e288bcf8c3e7f2c160315a07a163.scope - libcontainer container c342b716152acfb7b0fd1769d9131c0cd622e288bcf8c3e7f2c160315a07a163.
Sep 12 17:33:50.107599 containerd[1498]: time="2025-09-12T17:33:50.107553274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vtlr5,Uid:f89109d0-60eb-4404-9a68-69d42dee1a67,Namespace:kube-system,Attempt:0,} returns sandbox id \"c342b716152acfb7b0fd1769d9131c0cd622e288bcf8c3e7f2c160315a07a163\""
Sep 12 17:33:50.111631 containerd[1498]: time="2025-09-12T17:33:50.111102773Z" level=info msg="CreateContainer within sandbox \"c342b716152acfb7b0fd1769d9131c0cd622e288bcf8c3e7f2c160315a07a163\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:33:50.126446 containerd[1498]: time="2025-09-12T17:33:50.126406854Z" level=info msg="CreateContainer within sandbox \"c342b716152acfb7b0fd1769d9131c0cd622e288bcf8c3e7f2c160315a07a163\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00a99a22e5a69f07b0045322bca68b50f66627c14d8855c6efecdde0d3f3272b\""
Sep 12 17:33:50.129176 containerd[1498]: time="2025-09-12T17:33:50.128503447Z" level=info msg="StartContainer for \"00a99a22e5a69f07b0045322bca68b50f66627c14d8855c6efecdde0d3f3272b\""
Sep 12 17:33:50.161601 systemd[1]: Started cri-containerd-00a99a22e5a69f07b0045322bca68b50f66627c14d8855c6efecdde0d3f3272b.scope - libcontainer container 00a99a22e5a69f07b0045322bca68b50f66627c14d8855c6efecdde0d3f3272b.
Sep 12 17:33:50.171609 systemd[1]: Created slice kubepods-besteffort-podfda03ae4_a905_469a_8ac4_7f32d262d53c.slice - libcontainer container kubepods-besteffort-podfda03ae4_a905_469a_8ac4_7f32d262d53c.slice.
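The RunPodSandbox, CreateContainer, StartContainer sequence above is the standard CRI call order for bringing up a pod; on this host the kubelet drives it over containerd's socket. A compressed sketch of the same three calls with the generated runtime.v1 gRPC client, trimmed to the metadata fields visible in this log (the image reference is hypothetical, since the log never records which image backs this container):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's default CRI socket; adjust for other hosts.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtime.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Sandbox metadata exactly as logged for kube-proxy-vtlr5.
        sandboxCfg := &runtime.PodSandboxConfig{Metadata: &runtime.PodSandboxMetadata{
            Name:      "kube-proxy-vtlr5",
            Uid:       "f89109d0-60eb-4404-9a68-69d42dee1a67",
            Namespace: "kube-system",
            Attempt:   0,
        }}
        sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }

        cc, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtime.ContainerConfig{
                Metadata: &runtime.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
                // Hypothetical image ref; not recorded in this log.
                Image: &runtime.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            panic(err)
        }

        _, err = rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: cc.ContainerId})
        fmt.Println("started:", cc.ContainerId, err)
    }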
Sep 12 17:33:50.199976 containerd[1498]: time="2025-09-12T17:33:50.199945763Z" level=info msg="StartContainer for \"00a99a22e5a69f07b0045322bca68b50f66627c14d8855c6efecdde0d3f3272b\" returns successfully"
Sep 12 17:33:50.211288 kubelet[2549]: I0912 17:33:50.211237 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fda03ae4-a905-469a-8ac4-7f32d262d53c-var-lib-calico\") pod \"tigera-operator-58fc44c59b-2d25c\" (UID: \"fda03ae4-a905-469a-8ac4-7f32d262d53c\") " pod="tigera-operator/tigera-operator-58fc44c59b-2d25c"
Sep 12 17:33:50.211288 kubelet[2549]: I0912 17:33:50.211292 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d977\" (UniqueName: \"kubernetes.io/projected/fda03ae4-a905-469a-8ac4-7f32d262d53c-kube-api-access-2d977\") pod \"tigera-operator-58fc44c59b-2d25c\" (UID: \"fda03ae4-a905-469a-8ac4-7f32d262d53c\") " pod="tigera-operator/tigera-operator-58fc44c59b-2d25c"
Sep 12 17:33:50.334853 kubelet[2549]: I0912 17:33:50.332289 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vtlr5" podStartSLOduration=1.33226823 podStartE2EDuration="1.33226823s" podCreationTimestamp="2025-09-12 17:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:50.320422095 +0000 UTC m=+7.151260986" watchObservedRunningTime="2025-09-12 17:33:50.33226823 +0000 UTC m=+7.163107121"
Sep 12 17:33:50.475911 containerd[1498]: time="2025-09-12T17:33:50.475860626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-2d25c,Uid:fda03ae4-a905-469a-8ac4-7f32d262d53c,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:33:50.499266 containerd[1498]: time="2025-09-12T17:33:50.497241057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:33:50.499266 containerd[1498]: time="2025-09-12T17:33:50.497350202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:33:50.499266 containerd[1498]: time="2025-09-12T17:33:50.497372596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:33:50.499266 containerd[1498]: time="2025-09-12T17:33:50.497607950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:33:50.517318 systemd[1]: Started cri-containerd-df960eda949a64b9aba8b336adfacc2e5e28bfb6bd3efbccd9d6e7f33a6485ab.scope - libcontainer container df960eda949a64b9aba8b336adfacc2e5e28bfb6bd3efbccd9d6e7f33a6485ab.
Sep 12 17:33:50.554474 containerd[1498]: time="2025-09-12T17:33:50.554422157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-2d25c,Uid:fda03ae4-a905-469a-8ac4-7f32d262d53c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"df960eda949a64b9aba8b336adfacc2e5e28bfb6bd3efbccd9d6e7f33a6485ab\""
Sep 12 17:33:50.556725 containerd[1498]: time="2025-09-12T17:33:50.556704496Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:33:50.925142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084615318.mount: Deactivated successfully.
Sep 12 17:33:52.287693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864050496.mount: Deactivated successfully.
Sep 12 17:33:52.653608 containerd[1498]: time="2025-09-12T17:33:52.653553392Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:52.654466 containerd[1498]: time="2025-09-12T17:33:52.654347211Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 17:33:52.655436 containerd[1498]: time="2025-09-12T17:33:52.655209603Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:52.657124 containerd[1498]: time="2025-09-12T17:33:52.657095586Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:52.657684 containerd[1498]: time="2025-09-12T17:33:52.657650865Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.100920068s"
Sep 12 17:33:52.657729 containerd[1498]: time="2025-09-12T17:33:52.657685543Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 17:33:52.660681 containerd[1498]: time="2025-09-12T17:33:52.660651506Z" level=info msg="CreateContainer within sandbox \"df960eda949a64b9aba8b336adfacc2e5e28bfb6bd3efbccd9d6e7f33a6485ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 17:33:52.678111 containerd[1498]: time="2025-09-12T17:33:52.678017716Z" level=info msg="CreateContainer within sandbox \"df960eda949a64b9aba8b336adfacc2e5e28bfb6bd3efbccd9d6e7f33a6485ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad\""
Sep 12 17:33:52.679721 containerd[1498]: time="2025-09-12T17:33:52.679679437Z" level=info msg="StartContainer for \"346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad\""
Sep 12 17:33:52.713442 systemd[1]: Started cri-containerd-346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad.scope - libcontainer container 346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad.
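For scale: the "stop pulling" entry above reports bytes read=25062609 for this pull and the "Pulled image" entry times it at 2.100920068s, so the operator image came down at roughly 25.06 MB / 2.10 s ≈ 11.9 MB/s, consistent with the repo size of 25058604 bytes logged alongside.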
Sep 12 17:33:52.733388 containerd[1498]: time="2025-09-12T17:33:52.733344812Z" level=info msg="StartContainer for \"346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad\" returns successfully"
Sep 12 17:33:53.338032 kubelet[2549]: I0912 17:33:53.337822 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-2d25c" podStartSLOduration=1.234853173 podStartE2EDuration="3.337804615s" podCreationTimestamp="2025-09-12 17:33:50 +0000 UTC" firstStartedPulling="2025-09-12 17:33:50.555916262 +0000 UTC m=+7.386755153" lastFinishedPulling="2025-09-12 17:33:52.658867704 +0000 UTC m=+9.489706595" observedRunningTime="2025-09-12 17:33:53.327191743 +0000 UTC m=+10.158030644" watchObservedRunningTime="2025-09-12 17:33:53.337804615 +0000 UTC m=+10.168643496"
Sep 12 17:33:58.838022 sudo[1697]: pam_unix(sudo:session): session closed for user root
Sep 12 17:33:59.015492 sshd[1694]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:59.019550 systemd[1]: sshd@6-95.216.139.29:22-147.75.109.163:47892.service: Deactivated successfully.
Sep 12 17:33:59.021817 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:33:59.022390 systemd[1]: session-7.scope: Consumed 5.037s CPU time, 140.0M memory peak, 0B memory swap peak.
Sep 12 17:33:59.023579 systemd-logind[1473]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:33:59.024828 systemd-logind[1473]: Removed session 7.
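The tigera-operator startup entry above also shows how the tracker splits out pull time, unlike the no-pull static pods earlier: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (17:33:53.337804615 − 17:33:50 = 3.337804615s), and podStartSLOduration subtracts the image pull window, 3.337804615s − (17:33:52.658867704 − 17:33:50.555916262 = 2.102951442s) = 1.234853173s, matching both logged figures exactly.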
Sep 12 17:34:01.613107 systemd[1]: Created slice kubepods-besteffort-podf4d43669_309f_4a62_9188_d69d5f8321c6.slice - libcontainer container kubepods-besteffort-podf4d43669_309f_4a62_9188_d69d5f8321c6.slice.
Sep 12 17:34:01.688387 kubelet[2549]: I0912 17:34:01.688346 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f4d43669-309f-4a62-9188-d69d5f8321c6-typha-certs\") pod \"calico-typha-7f6ffb8894-7d7st\" (UID: \"f4d43669-309f-4a62-9188-d69d5f8321c6\") " pod="calico-system/calico-typha-7f6ffb8894-7d7st"
Sep 12 17:34:01.688387 kubelet[2549]: I0912 17:34:01.688389 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4d43669-309f-4a62-9188-d69d5f8321c6-tigera-ca-bundle\") pod \"calico-typha-7f6ffb8894-7d7st\" (UID: \"f4d43669-309f-4a62-9188-d69d5f8321c6\") " pod="calico-system/calico-typha-7f6ffb8894-7d7st"
Sep 12 17:34:01.688750 kubelet[2549]: I0912 17:34:01.688405 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pxqh\" (UniqueName: \"kubernetes.io/projected/f4d43669-309f-4a62-9188-d69d5f8321c6-kube-api-access-9pxqh\") pod \"calico-typha-7f6ffb8894-7d7st\" (UID: \"f4d43669-309f-4a62-9188-d69d5f8321c6\") " pod="calico-system/calico-typha-7f6ffb8894-7d7st"
Sep 12 17:34:01.918816 containerd[1498]: time="2025-09-12T17:34:01.918460831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f6ffb8894-7d7st,Uid:f4d43669-309f-4a62-9188-d69d5f8321c6,Namespace:calico-system,Attempt:0,}"
Sep 12 17:34:01.935634 systemd[1]: Created slice kubepods-besteffort-pod1ecbcd36_f602_4b27_a73f_38d983581cc0.slice - libcontainer container kubepods-besteffort-pod1ecbcd36_f602_4b27_a73f_38d983581cc0.slice.
Sep 12 17:34:01.971574 containerd[1498]: time="2025-09-12T17:34:01.971471699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:34:01.972555 containerd[1498]: time="2025-09-12T17:34:01.972246399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:34:01.972620 containerd[1498]: time="2025-09-12T17:34:01.972576427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:01.974331 containerd[1498]: time="2025-09-12T17:34:01.974271990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:01.991483 kubelet[2549]: I0912 17:34:01.991142 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1ecbcd36-f602-4b27-a73f-38d983581cc0-flexvol-driver-host\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2"
Sep 12 17:34:01.991483 kubelet[2549]: I0912 17:34:01.991196 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ecbcd36-f602-4b27-a73f-38d983581cc0-lib-modules\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2"
Sep 12 17:34:01.991483 kubelet[2549]: I0912 17:34:01.991224 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1ecbcd36-f602-4b27-a73f-38d983581cc0-node-certs\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2"
Sep 12 17:34:01.991483 kubelet[2549]: I0912 17:34:01.991238 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ecbcd36-f602-4b27-a73f-38d983581cc0-tigera-ca-bundle\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2"
Sep 12 17:34:01.991483 kubelet[2549]: I0912 17:34:01.991252 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1ecbcd36-f602-4b27-a73f-38d983581cc0-cni-net-dir\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2"
Sep 12 17:34:01.991661 kubelet[2549]: I0912 17:34:01.991266 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1ecbcd36-f602-4b27-a73f-38d983581cc0-policysync\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2"
Sep 12 17:34:01.991661 kubelet[2549]: I0912 17:34:01.991283 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1ecbcd36-f602-4b27-a73f-38d983581cc0-cni-bin-dir\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2"
Sep 12 17:34:01.991661 kubelet[2549]: I0912 17:34:01.991308 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1ecbcd36-f602-4b27-a73f-38d983581cc0-var-run-calico\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2"
(UniqueName: \"kubernetes.io/host-path/1ecbcd36-f602-4b27-a73f-38d983581cc0-var-run-calico\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2" Sep 12 17:34:01.991661 kubelet[2549]: I0912 17:34:01.991331 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ecbcd36-f602-4b27-a73f-38d983581cc0-var-lib-calico\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2" Sep 12 17:34:01.991661 kubelet[2549]: I0912 17:34:01.991359 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1ecbcd36-f602-4b27-a73f-38d983581cc0-cni-log-dir\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2" Sep 12 17:34:01.991753 kubelet[2549]: I0912 17:34:01.991382 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ecbcd36-f602-4b27-a73f-38d983581cc0-xtables-lock\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2" Sep 12 17:34:01.991753 kubelet[2549]: I0912 17:34:01.991409 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-727ph\" (UniqueName: \"kubernetes.io/projected/1ecbcd36-f602-4b27-a73f-38d983581cc0-kube-api-access-727ph\") pod \"calico-node-9l9b2\" (UID: \"1ecbcd36-f602-4b27-a73f-38d983581cc0\") " pod="calico-system/calico-node-9l9b2" Sep 12 17:34:02.022348 systemd[1]: Started cri-containerd-baefb13718f63d73666907e8d837fbff431174a080349dc885313433d5d2e7d9.scope - libcontainer container baefb13718f63d73666907e8d837fbff431174a080349dc885313433d5d2e7d9. Sep 12 17:34:02.078986 containerd[1498]: time="2025-09-12T17:34:02.077132250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f6ffb8894-7d7st,Uid:f4d43669-309f-4a62-9188-d69d5f8321c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"baefb13718f63d73666907e8d837fbff431174a080349dc885313433d5d2e7d9\"" Sep 12 17:34:02.086495 containerd[1498]: time="2025-09-12T17:34:02.086471620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:34:02.102548 kubelet[2549]: E0912 17:34:02.102525 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.102674 kubelet[2549]: W0912 17:34:02.102662 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.102799 kubelet[2549]: E0912 17:34:02.102787 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 12 17:34:02.103954 kubelet[2549]: E0912 17:34:02.103943 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:34:02.104013 kubelet[2549]: W0912 17:34:02.104004 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:34:02.104059 kubelet[2549]: E0912 17:34:02.104051 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:34:02.104333 kubelet[2549]: E0912 17:34:02.104288 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:34:02.104917 kubelet[2549]: W0912 17:34:02.104905 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:34:02.104972 kubelet[2549]: E0912 17:34:02.104964 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:34:02.216292 kubelet[2549]: E0912 17:34:02.216136 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4tcw" podUID="262fff4b-d78b-430c-976d-43c3eb6a4adc"
Sep 12 17:34:02.245001 containerd[1498]: time="2025-09-12T17:34:02.244949609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9l9b2,Uid:1ecbcd36-f602-4b27-a73f-38d983581cc0,Namespace:calico-system,Attempt:0,}"
Sep 12 17:34:02.280978 kubelet[2549]: E0912 17:34:02.280940 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:34:02.281256 kubelet[2549]: W0912 17:34:02.280987 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:34:02.281256 kubelet[2549]: E0912 17:34:02.281012 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:34:02.293166 kubelet[2549]: I0912 17:34:02.293136 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/262fff4b-d78b-430c-976d-43c3eb6a4adc-socket-dir\") pod \"csi-node-driver-j4tcw\" (UID: \"262fff4b-d78b-430c-976d-43c3eb6a4adc\") " pod="calico-system/csi-node-driver-j4tcw"
Sep 12 17:34:02.293770 kubelet[2549]: I0912 17:34:02.293716 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/262fff4b-d78b-430c-976d-43c3eb6a4adc-registration-dir\") pod \"csi-node-driver-j4tcw\" (UID: \"262fff4b-d78b-430c-976d-43c3eb6a4adc\") " pod="calico-system/csi-node-driver-j4tcw"
Sep 12 17:34:02.294725 kubelet[2549]: I0912 17:34:02.294048 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mhd4\" (UniqueName: \"kubernetes.io/projected/262fff4b-d78b-430c-976d-43c3eb6a4adc-kube-api-access-5mhd4\") pod \"csi-node-driver-j4tcw\" (UID: \"262fff4b-d78b-430c-976d-43c3eb6a4adc\") " pod="calico-system/csi-node-driver-j4tcw"
Sep 12 17:34:02.294877 kubelet[2549]: I0912 17:34:02.294788 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/262fff4b-d78b-430c-976d-43c3eb6a4adc-kubelet-dir\") pod \"csi-node-driver-j4tcw\" (UID: \"262fff4b-d78b-430c-976d-43c3eb6a4adc\") " pod="calico-system/csi-node-driver-j4tcw"
Sep 12 17:34:02.295171 containerd[1498]: time="2025-09-12T17:34:02.294560268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:34:02.295171 containerd[1498]: time="2025-09-12T17:34:02.294609864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:34:02.295171 containerd[1498]: time="2025-09-12T17:34:02.294622408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:02.295551 kubelet[2549]: I0912 17:34:02.295513 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/262fff4b-d78b-430c-976d-43c3eb6a4adc-varrun\") pod \"csi-node-driver-j4tcw\" (UID: \"262fff4b-d78b-430c-976d-43c3eb6a4adc\") " pod="calico-system/csi-node-driver-j4tcw"
Error: unexpected end of JSON input" Sep 12 17:34:02.295943 kubelet[2549]: E0912 17:34:02.295858 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.295943 kubelet[2549]: W0912 17:34:02.295867 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.295943 kubelet[2549]: E0912 17:34:02.295873 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.296005 kubelet[2549]: E0912 17:34:02.296000 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.296187 kubelet[2549]: W0912 17:34:02.296007 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.296187 kubelet[2549]: E0912 17:34:02.296015 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.296187 kubelet[2549]: E0912 17:34:02.296129 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.296187 kubelet[2549]: W0912 17:34:02.296135 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.296187 kubelet[2549]: E0912 17:34:02.296141 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.296479 containerd[1498]: time="2025-09-12T17:34:02.296347986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:02.312266 systemd[1]: Started cri-containerd-a8f6a73108e2decf9d937f29bbfcb855968cbba12d823346199930459b7978d0.scope - libcontainer container a8f6a73108e2decf9d937f29bbfcb855968cbba12d823346199930459b7978d0. Sep 12 17:34:02.329896 containerd[1498]: time="2025-09-12T17:34:02.329785437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9l9b2,Uid:1ecbcd36-f602-4b27-a73f-38d983581cc0,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8f6a73108e2decf9d937f29bbfcb855968cbba12d823346199930459b7978d0\"" Sep 12 17:34:02.398994 kubelet[2549]: E0912 17:34:02.397536 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.398994 kubelet[2549]: W0912 17:34:02.397618 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.398994 kubelet[2549]: E0912 17:34:02.397646 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:02.398994 kubelet[2549]: E0912 17:34:02.398048 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.398994 kubelet[2549]: W0912 17:34:02.398063 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.398994 kubelet[2549]: E0912 17:34:02.398344 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.398994 kubelet[2549]: E0912 17:34:02.398436 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.398994 kubelet[2549]: W0912 17:34:02.398446 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.398994 kubelet[2549]: E0912 17:34:02.398460 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.398994 kubelet[2549]: E0912 17:34:02.398754 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.399441 kubelet[2549]: W0912 17:34:02.398767 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.399441 kubelet[2549]: E0912 17:34:02.398906 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.399441 kubelet[2549]: E0912 17:34:02.399300 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.399441 kubelet[2549]: W0912 17:34:02.399309 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.399441 kubelet[2549]: E0912 17:34:02.399428 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.400063 kubelet[2549]: E0912 17:34:02.399620 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.400063 kubelet[2549]: W0912 17:34:02.399630 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.400063 kubelet[2549]: E0912 17:34:02.399643 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:02.400063 kubelet[2549]: E0912 17:34:02.399920 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.400063 kubelet[2549]: W0912 17:34:02.399928 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.400063 kubelet[2549]: E0912 17:34:02.399947 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.400345 kubelet[2549]: E0912 17:34:02.400132 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.400345 kubelet[2549]: W0912 17:34:02.400139 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.400345 kubelet[2549]: E0912 17:34:02.400268 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.400407 kubelet[2549]: E0912 17:34:02.400387 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.400407 kubelet[2549]: W0912 17:34:02.400396 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.400568 kubelet[2549]: E0912 17:34:02.400480 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.400611 kubelet[2549]: E0912 17:34:02.400585 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.400611 kubelet[2549]: W0912 17:34:02.400592 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.400611 kubelet[2549]: E0912 17:34:02.400606 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.402456 kubelet[2549]: E0912 17:34:02.400898 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.402456 kubelet[2549]: W0912 17:34:02.400907 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.402456 kubelet[2549]: E0912 17:34:02.400918 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:02.402456 kubelet[2549]: E0912 17:34:02.401087 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.402456 kubelet[2549]: W0912 17:34:02.401094 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.402456 kubelet[2549]: E0912 17:34:02.401107 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.402456 kubelet[2549]: E0912 17:34:02.401337 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.402456 kubelet[2549]: W0912 17:34:02.401345 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.402456 kubelet[2549]: E0912 17:34:02.401357 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.402456 kubelet[2549]: E0912 17:34:02.401574 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.402840 kubelet[2549]: W0912 17:34:02.401582 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.402840 kubelet[2549]: E0912 17:34:02.401699 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.402840 kubelet[2549]: E0912 17:34:02.401988 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.402840 kubelet[2549]: W0912 17:34:02.402004 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.402840 kubelet[2549]: E0912 17:34:02.402135 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.402840 kubelet[2549]: E0912 17:34:02.402355 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.402840 kubelet[2549]: W0912 17:34:02.402383 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.402840 kubelet[2549]: E0912 17:34:02.402471 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:02.402840 kubelet[2549]: E0912 17:34:02.402612 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.404745 kubelet[2549]: W0912 17:34:02.402863 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.404745 kubelet[2549]: E0912 17:34:02.402944 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.404745 kubelet[2549]: E0912 17:34:02.403102 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.404745 kubelet[2549]: W0912 17:34:02.403109 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.404745 kubelet[2549]: E0912 17:34:02.403127 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.404745 kubelet[2549]: E0912 17:34:02.403322 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.404745 kubelet[2549]: W0912 17:34:02.403329 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.404745 kubelet[2549]: E0912 17:34:02.403346 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.404745 kubelet[2549]: E0912 17:34:02.403484 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.404745 kubelet[2549]: W0912 17:34:02.403491 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.406205 kubelet[2549]: E0912 17:34:02.403500 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.406205 kubelet[2549]: E0912 17:34:02.403641 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.406205 kubelet[2549]: W0912 17:34:02.403648 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.406205 kubelet[2549]: E0912 17:34:02.403665 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:02.406205 kubelet[2549]: E0912 17:34:02.403825 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.406205 kubelet[2549]: W0912 17:34:02.403831 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.406205 kubelet[2549]: E0912 17:34:02.403840 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.406205 kubelet[2549]: E0912 17:34:02.403961 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.406205 kubelet[2549]: W0912 17:34:02.403968 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.406205 kubelet[2549]: E0912 17:34:02.403974 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.406439 kubelet[2549]: E0912 17:34:02.404114 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.406439 kubelet[2549]: W0912 17:34:02.404121 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.406439 kubelet[2549]: E0912 17:34:02.404131 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.406439 kubelet[2549]: E0912 17:34:02.404804 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.406439 kubelet[2549]: W0912 17:34:02.404812 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.406439 kubelet[2549]: E0912 17:34:02.404819 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:02.416964 kubelet[2549]: E0912 17:34:02.416387 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:02.416964 kubelet[2549]: W0912 17:34:02.416408 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:02.416964 kubelet[2549]: E0912 17:34:02.416435 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:03.796899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347777010.mount: Deactivated successfully. Sep 12 17:34:04.174221 containerd[1498]: time="2025-09-12T17:34:04.174165546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:04.175127 containerd[1498]: time="2025-09-12T17:34:04.175018541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:34:04.176704 containerd[1498]: time="2025-09-12T17:34:04.175826492Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:04.185787 containerd[1498]: time="2025-09-12T17:34:04.185748530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:04.186457 containerd[1498]: time="2025-09-12T17:34:04.186424565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.099810378s" Sep 12 17:34:04.186515 containerd[1498]: time="2025-09-12T17:34:04.186459722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:34:04.193672 containerd[1498]: time="2025-09-12T17:34:04.193633287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:34:04.228815 containerd[1498]: time="2025-09-12T17:34:04.228757114Z" level=info msg="CreateContainer within sandbox \"baefb13718f63d73666907e8d837fbff431174a080349dc885313433d5d2e7d9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:34:04.242440 containerd[1498]: time="2025-09-12T17:34:04.242392179Z" level=info msg="CreateContainer within sandbox \"baefb13718f63d73666907e8d837fbff431174a080349dc885313433d5d2e7d9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"53148c77face824f2b6293a80bf7f86bd660cd79249ec0ac6a59d3d920286ab0\"" Sep 12 17:34:04.245594 containerd[1498]: time="2025-09-12T17:34:04.244860653Z" level=info msg="StartContainer for \"53148c77face824f2b6293a80bf7f86bd660cd79249ec0ac6a59d3d920286ab0\"" Sep 12 17:34:04.270535 kubelet[2549]: E0912 17:34:04.270490 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4tcw" podUID="262fff4b-d78b-430c-976d-43c3eb6a4adc" Sep 12 17:34:04.302265 systemd[1]: Started cri-containerd-53148c77face824f2b6293a80bf7f86bd660cd79249ec0ac6a59d3d920286ab0.scope - libcontainer container 53148c77face824f2b6293a80bf7f86bd660cd79249ec0ac6a59d3d920286ab0. 
Sep 12 17:34:04.345954 containerd[1498]: time="2025-09-12T17:34:04.345920561Z" level=info msg="StartContainer for \"53148c77face824f2b6293a80bf7f86bd660cd79249ec0ac6a59d3d920286ab0\" returns successfully"
Sep 12 17:34:04.383566 kubelet[2549]: I0912 17:34:04.381660 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f6ffb8894-7d7st" podStartSLOduration=1.271397582 podStartE2EDuration="3.381644086s" podCreationTimestamp="2025-09-12 17:34:01 +0000 UTC" firstStartedPulling="2025-09-12 17:34:02.083248827 +0000 UTC m=+18.914087718" lastFinishedPulling="2025-09-12 17:34:04.193495341 +0000 UTC m=+21.024334222" observedRunningTime="2025-09-12 17:34:04.381585912 +0000 UTC m=+21.212424803" watchObservedRunningTime="2025-09-12 17:34:04.381644086 +0000 UTC m=+21.212482966"
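The startup-latency entry carries enough data to check its own arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window, taken on the monotonic offsets (the m=+... values). Redoing the computation with the numbers from the entry:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Monotonic offsets from the log entry above, in nanoseconds.
    	firstStartedPulling := 18914087718 * time.Nanosecond // m=+18.914087718
    	lastFinishedPulling := 21024334222 * time.Nanosecond // m=+21.024334222
    	e2e := 3381644086 * time.Nanosecond                  // podStartE2EDuration

    	imagePull := lastFinishedPulling - firstStartedPulling
    	fmt.Println(imagePull)       // 2.110246504s spent pulling images
    	fmt.Println(e2e - imagePull) // 1.271397582s, matching podStartSLOduration
    }

The 2.110246504s pull window also lines up with containerd's own report of fetching the typha image in 2.099810378s, plus roughly ten milliseconds of surrounding CRI bookkeeping.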
Error: unexpected end of JSON input" Sep 12 17:34:04.411376 kubelet[2549]: E0912 17:34:04.410637 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.411376 kubelet[2549]: W0912 17:34:04.410645 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.411376 kubelet[2549]: E0912 17:34:04.410652 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.411376 kubelet[2549]: E0912 17:34:04.410823 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.411376 kubelet[2549]: W0912 17:34:04.410831 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.411376 kubelet[2549]: E0912 17:34:04.410852 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.411376 kubelet[2549]: E0912 17:34:04.411011 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.411552 kubelet[2549]: W0912 17:34:04.411018 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.411552 kubelet[2549]: E0912 17:34:04.411025 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.411552 kubelet[2549]: E0912 17:34:04.411201 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.411552 kubelet[2549]: W0912 17:34:04.411208 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.411552 kubelet[2549]: E0912 17:34:04.411215 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.411552 kubelet[2549]: E0912 17:34:04.411481 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.411552 kubelet[2549]: W0912 17:34:04.411526 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.411552 kubelet[2549]: E0912 17:34:04.411538 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:04.411815 kubelet[2549]: E0912 17:34:04.411791 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.411815 kubelet[2549]: W0912 17:34:04.411806 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.411815 kubelet[2549]: E0912 17:34:04.411815 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.412058 kubelet[2549]: E0912 17:34:04.412036 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.412058 kubelet[2549]: W0912 17:34:04.412050 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.412058 kubelet[2549]: E0912 17:34:04.412057 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.412244 kubelet[2549]: E0912 17:34:04.412223 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.412244 kubelet[2549]: W0912 17:34:04.412239 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.412293 kubelet[2549]: E0912 17:34:04.412246 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.413164 kubelet[2549]: E0912 17:34:04.412415 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.413164 kubelet[2549]: W0912 17:34:04.412426 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.413164 kubelet[2549]: E0912 17:34:04.412433 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.413164 kubelet[2549]: E0912 17:34:04.412665 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.413164 kubelet[2549]: W0912 17:34:04.412673 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.413164 kubelet[2549]: E0912 17:34:04.412682 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:04.413164 kubelet[2549]: E0912 17:34:04.412819 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.413164 kubelet[2549]: W0912 17:34:04.412826 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.413164 kubelet[2549]: E0912 17:34:04.412833 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.420283 kubelet[2549]: E0912 17:34:04.420258 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.420283 kubelet[2549]: W0912 17:34:04.420276 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.420283 kubelet[2549]: E0912 17:34:04.420285 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.428089 kubelet[2549]: E0912 17:34:04.428016 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.428089 kubelet[2549]: W0912 17:34:04.428041 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.428089 kubelet[2549]: E0912 17:34:04.428065 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.429588 kubelet[2549]: E0912 17:34:04.428281 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.429588 kubelet[2549]: W0912 17:34:04.428295 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.429588 kubelet[2549]: E0912 17:34:04.428311 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.429588 kubelet[2549]: E0912 17:34:04.428792 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.429588 kubelet[2549]: W0912 17:34:04.428801 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.429588 kubelet[2549]: E0912 17:34:04.428851 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:04.429588 kubelet[2549]: E0912 17:34:04.429083 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.429588 kubelet[2549]: W0912 17:34:04.429092 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.429588 kubelet[2549]: E0912 17:34:04.429143 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.429588 kubelet[2549]: E0912 17:34:04.429344 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.429796 kubelet[2549]: W0912 17:34:04.429351 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.429796 kubelet[2549]: E0912 17:34:04.429375 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.429796 kubelet[2549]: E0912 17:34:04.429561 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.429796 kubelet[2549]: W0912 17:34:04.429569 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.429796 kubelet[2549]: E0912 17:34:04.429579 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.429796 kubelet[2549]: E0912 17:34:04.429765 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.429796 kubelet[2549]: W0912 17:34:04.429773 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.429909 kubelet[2549]: E0912 17:34:04.429833 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.430416 kubelet[2549]: E0912 17:34:04.430302 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.430416 kubelet[2549]: W0912 17:34:04.430317 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.430416 kubelet[2549]: E0912 17:34:04.430400 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:04.430939 kubelet[2549]: E0912 17:34:04.430498 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.430939 kubelet[2549]: W0912 17:34:04.430519 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.430939 kubelet[2549]: E0912 17:34:04.430597 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.430939 kubelet[2549]: E0912 17:34:04.430723 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.430939 kubelet[2549]: W0912 17:34:04.430730 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.430939 kubelet[2549]: E0912 17:34:04.430757 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.430939 kubelet[2549]: E0912 17:34:04.430932 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.430939 kubelet[2549]: W0912 17:34:04.430940 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.431093 kubelet[2549]: E0912 17:34:04.430958 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.432572 kubelet[2549]: E0912 17:34:04.431185 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.432572 kubelet[2549]: W0912 17:34:04.431197 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.432572 kubelet[2549]: E0912 17:34:04.431229 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.432572 kubelet[2549]: E0912 17:34:04.431416 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.432572 kubelet[2549]: W0912 17:34:04.431423 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.432572 kubelet[2549]: E0912 17:34:04.431433 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:04.432572 kubelet[2549]: E0912 17:34:04.431802 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.432572 kubelet[2549]: W0912 17:34:04.431810 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.432572 kubelet[2549]: E0912 17:34:04.431873 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.432572 kubelet[2549]: E0912 17:34:04.432048 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.432761 kubelet[2549]: W0912 17:34:04.432060 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.432761 kubelet[2549]: E0912 17:34:04.432070 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.432761 kubelet[2549]: E0912 17:34:04.432280 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.432761 kubelet[2549]: W0912 17:34:04.432312 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.432761 kubelet[2549]: E0912 17:34:04.432320 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:04.432761 kubelet[2549]: E0912 17:34:04.432598 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:04.432761 kubelet[2549]: W0912 17:34:04.432606 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:04.432761 kubelet[2549]: E0912 17:34:04.432613 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.354692 kubelet[2549]: I0912 17:34:05.354648 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:34:05.419623 kubelet[2549]: E0912 17:34:05.419582 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.419623 kubelet[2549]: W0912 17:34:05.419605 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.419623 kubelet[2549]: E0912 17:34:05.419625 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:05.422809 kubelet[2549]: E0912 17:34:05.422781 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.422809 kubelet[2549]: W0912 17:34:05.422799 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.422809 kubelet[2549]: E0912 17:34:05.422813 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.423102 kubelet[2549]: E0912 17:34:05.423065 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.423102 kubelet[2549]: W0912 17:34:05.423090 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.423288 kubelet[2549]: E0912 17:34:05.423113 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.423411 kubelet[2549]: E0912 17:34:05.423382 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.423411 kubelet[2549]: W0912 17:34:05.423397 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.423411 kubelet[2549]: E0912 17:34:05.423410 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.423615 kubelet[2549]: E0912 17:34:05.423591 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.423615 kubelet[2549]: W0912 17:34:05.423605 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.423615 kubelet[2549]: E0912 17:34:05.423614 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.423772 kubelet[2549]: E0912 17:34:05.423747 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.423772 kubelet[2549]: W0912 17:34:05.423764 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.423772 kubelet[2549]: E0912 17:34:05.423772 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:05.423967 kubelet[2549]: E0912 17:34:05.423940 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.423967 kubelet[2549]: W0912 17:34:05.423957 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.423967 kubelet[2549]: E0912 17:34:05.423969 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.424206 kubelet[2549]: E0912 17:34:05.424178 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.424206 kubelet[2549]: W0912 17:34:05.424195 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.424206 kubelet[2549]: E0912 17:34:05.424206 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.424437 kubelet[2549]: E0912 17:34:05.424399 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.424437 kubelet[2549]: W0912 17:34:05.424435 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.424515 kubelet[2549]: E0912 17:34:05.424450 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.424672 kubelet[2549]: E0912 17:34:05.424647 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.424672 kubelet[2549]: W0912 17:34:05.424662 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.424750 kubelet[2549]: E0912 17:34:05.424672 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.424842 kubelet[2549]: E0912 17:34:05.424819 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.424842 kubelet[2549]: W0912 17:34:05.424832 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.424842 kubelet[2549]: E0912 17:34:05.424841 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:05.425040 kubelet[2549]: E0912 17:34:05.425012 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.425040 kubelet[2549]: W0912 17:34:05.425028 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.425040 kubelet[2549]: E0912 17:34:05.425038 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.425257 kubelet[2549]: E0912 17:34:05.425232 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.425257 kubelet[2549]: W0912 17:34:05.425247 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.425257 kubelet[2549]: E0912 17:34:05.425255 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.425461 kubelet[2549]: E0912 17:34:05.425434 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.425461 kubelet[2549]: W0912 17:34:05.425450 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.425461 kubelet[2549]: E0912 17:34:05.425460 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.425643 kubelet[2549]: E0912 17:34:05.425620 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.425643 kubelet[2549]: W0912 17:34:05.425633 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.425643 kubelet[2549]: E0912 17:34:05.425642 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.437103 kubelet[2549]: E0912 17:34:05.437076 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.437103 kubelet[2549]: W0912 17:34:05.437093 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.437103 kubelet[2549]: E0912 17:34:05.437104 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:05.437329 kubelet[2549]: E0912 17:34:05.437319 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.437329 kubelet[2549]: W0912 17:34:05.437329 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.437393 kubelet[2549]: E0912 17:34:05.437344 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.437583 kubelet[2549]: E0912 17:34:05.437562 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.437583 kubelet[2549]: W0912 17:34:05.437576 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.437703 kubelet[2549]: E0912 17:34:05.437591 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.437809 kubelet[2549]: E0912 17:34:05.437786 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.437809 kubelet[2549]: W0912 17:34:05.437803 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.437868 kubelet[2549]: E0912 17:34:05.437823 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.438047 kubelet[2549]: E0912 17:34:05.438008 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.438047 kubelet[2549]: W0912 17:34:05.438027 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.438047 kubelet[2549]: E0912 17:34:05.438042 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.438263 kubelet[2549]: E0912 17:34:05.438236 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.438263 kubelet[2549]: W0912 17:34:05.438255 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.438345 kubelet[2549]: E0912 17:34:05.438267 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:05.438535 kubelet[2549]: E0912 17:34:05.438508 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.438535 kubelet[2549]: W0912 17:34:05.438525 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.438634 kubelet[2549]: E0912 17:34:05.438606 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.438912 kubelet[2549]: E0912 17:34:05.438871 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.438912 kubelet[2549]: W0912 17:34:05.438884 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.439127 kubelet[2549]: E0912 17:34:05.439026 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.439127 kubelet[2549]: W0912 17:34:05.439037 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.439127 kubelet[2549]: E0912 17:34:05.439072 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.439127 kubelet[2549]: E0912 17:34:05.439104 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.439825 kubelet[2549]: E0912 17:34:05.439798 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.439825 kubelet[2549]: W0912 17:34:05.439813 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.439825 kubelet[2549]: E0912 17:34:05.439829 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.440100 kubelet[2549]: E0912 17:34:05.440045 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.440100 kubelet[2549]: W0912 17:34:05.440059 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.440363 kubelet[2549]: E0912 17:34:05.440240 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:05.440657 kubelet[2549]: E0912 17:34:05.440646 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.440799 kubelet[2549]: W0912 17:34:05.440698 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.440799 kubelet[2549]: E0912 17:34:05.440733 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.441119 kubelet[2549]: E0912 17:34:05.441095 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.441119 kubelet[2549]: W0912 17:34:05.441111 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.441228 kubelet[2549]: E0912 17:34:05.441126 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.441406 kubelet[2549]: E0912 17:34:05.441378 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.441406 kubelet[2549]: W0912 17:34:05.441398 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.441503 kubelet[2549]: E0912 17:34:05.441437 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.441695 kubelet[2549]: E0912 17:34:05.441669 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.441695 kubelet[2549]: W0912 17:34:05.441686 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.441774 kubelet[2549]: E0912 17:34:05.441709 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.442088 kubelet[2549]: E0912 17:34:05.442066 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.442088 kubelet[2549]: W0912 17:34:05.442080 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.442233 kubelet[2549]: E0912 17:34:05.442209 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:05.442383 kubelet[2549]: E0912 17:34:05.442357 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.442383 kubelet[2549]: W0912 17:34:05.442373 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.442476 kubelet[2549]: E0912 17:34:05.442387 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:05.442848 kubelet[2549]: E0912 17:34:05.442820 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:05.442848 kubelet[2549]: W0912 17:34:05.442836 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:05.442848 kubelet[2549]: E0912 17:34:05.442846 2549 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:06.254274 containerd[1498]: time="2025-09-12T17:34:06.254220928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:06.255311 containerd[1498]: time="2025-09-12T17:34:06.255166549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:34:06.257048 containerd[1498]: time="2025-09-12T17:34:06.255984104Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:06.258168 containerd[1498]: time="2025-09-12T17:34:06.257914373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:06.258857 containerd[1498]: time="2025-09-12T17:34:06.258393036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.064728106s" Sep 12 17:34:06.258857 containerd[1498]: time="2025-09-12T17:34:06.258422672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:34:06.261183 containerd[1498]: time="2025-09-12T17:34:06.261142993Z" level=info msg="CreateContainer within sandbox \"a8f6a73108e2decf9d937f29bbfcb855968cbba12d823346199930459b7978d0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:34:06.266132 kubelet[2549]: E0912 17:34:06.266099 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4tcw" podUID="262fff4b-d78b-430c-976d-43c3eb6a4adc" Sep 12 17:34:06.273434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447813220.mount: Deactivated successfully. Sep 12 17:34:06.275267 containerd[1498]: time="2025-09-12T17:34:06.274688057Z" level=info msg="CreateContainer within sandbox \"a8f6a73108e2decf9d937f29bbfcb855968cbba12d823346199930459b7978d0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f24ac8a0e4d6fd3848592473bb9e4324df249ff754a91f41df10072377de8793\"" Sep 12 17:34:06.276178 containerd[1498]: time="2025-09-12T17:34:06.275136972Z" level=info msg="StartContainer for \"f24ac8a0e4d6fd3848592473bb9e4324df249ff754a91f41df10072377de8793\"" Sep 12 17:34:06.297879 systemd[1]: run-containerd-runc-k8s.io-f24ac8a0e4d6fd3848592473bb9e4324df249ff754a91f41df10072377de8793-runc.LOkMFx.mount: Deactivated successfully. Sep 12 17:34:06.309287 systemd[1]: Started cri-containerd-f24ac8a0e4d6fd3848592473bb9e4324df249ff754a91f41df10072377de8793.scope - libcontainer container f24ac8a0e4d6fd3848592473bb9e4324df249ff754a91f41df10072377de8793. Sep 12 17:34:06.334252 containerd[1498]: time="2025-09-12T17:34:06.333833772Z" level=info msg="StartContainer for \"f24ac8a0e4d6fd3848592473bb9e4324df249ff754a91f41df10072377de8793\" returns successfully" Sep 12 17:34:06.350753 systemd[1]: cri-containerd-f24ac8a0e4d6fd3848592473bb9e4324df249ff754a91f41df10072377de8793.scope: Deactivated successfully. Sep 12 17:34:06.432812 containerd[1498]: time="2025-09-12T17:34:06.426065367Z" level=info msg="shim disconnected" id=f24ac8a0e4d6fd3848592473bb9e4324df249ff754a91f41df10072377de8793 namespace=k8s.io Sep 12 17:34:06.432812 containerd[1498]: time="2025-09-12T17:34:06.432739766Z" level=warning msg="cleaning up after shim disconnected" id=f24ac8a0e4d6fd3848592473bb9e4324df249ff754a91f41df10072377de8793 namespace=k8s.io Sep 12 17:34:06.432812 containerd[1498]: time="2025-09-12T17:34:06.432788379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:34:07.270026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f24ac8a0e4d6fd3848592473bb9e4324df249ff754a91f41df10072377de8793-rootfs.mount: Deactivated successfully. 
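The FlexVolume probe failures above all share one mechanism: the kubelet execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the init argument and decodes its stdout as JSON; since the nodeagent~uds/uds executable is missing here, stdout is empty, and decoding empty input is exactly what yields "unexpected end of JSON input". A minimal Go sketch of that call pattern (an illustration, not the kubelet's actual driver-call.go; the struct fields are trimmed assumptions):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus loosely mirrors the reply a FlexVolume driver is
    // expected to print; field set reduced for illustration.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func probeInit(driverPath string) (*driverStatus, error) {
        out, execErr := exec.Command(driverPath, "init").CombinedOutput()
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // With empty output, err is "unexpected end of JSON input",
            // matching the driver-call.go:262 entries above.
            return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
        }
        return &st, nil
    }

    func main() {
        _, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        fmt.Println(err)
    }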
Sep 12 17:34:07.370714 containerd[1498]: time="2025-09-12T17:34:07.370641117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 17:34:08.266180 kubelet[2549]: E0912 17:34:08.266070 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4tcw" podUID="262fff4b-d78b-430c-976d-43c3eb6a4adc"
Sep 12 17:34:10.015451 containerd[1498]: time="2025-09-12T17:34:10.015409202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:10.016361 containerd[1498]: time="2025-09-12T17:34:10.016278381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 12 17:34:10.017197 containerd[1498]: time="2025-09-12T17:34:10.017159073Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:10.018655 containerd[1498]: time="2025-09-12T17:34:10.018636631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:10.019510 containerd[1498]: time="2025-09-12T17:34:10.019126743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.648442241s"
Sep 12 17:34:10.019510 containerd[1498]: time="2025-09-12T17:34:10.019166639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 12 17:34:10.020627 containerd[1498]: time="2025-09-12T17:34:10.020589713Z" level=info msg="CreateContainer within sandbox \"a8f6a73108e2decf9d937f29bbfcb855968cbba12d823346199930459b7978d0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 17:34:10.035450 containerd[1498]: time="2025-09-12T17:34:10.035422194Z" level=info msg="CreateContainer within sandbox \"a8f6a73108e2decf9d937f29bbfcb855968cbba12d823346199930459b7978d0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"82cca8fc0738bb05d4c74cfc84a9cca3532e1ea1356879334ce116f5526101d1\""
Sep 12 17:34:10.035847 containerd[1498]: time="2025-09-12T17:34:10.035819407Z" level=info msg="StartContainer for \"82cca8fc0738bb05d4c74cfc84a9cca3532e1ea1356879334ce116f5526101d1\""
Sep 12 17:34:10.070813 systemd[1]: run-containerd-runc-k8s.io-82cca8fc0738bb05d4c74cfc84a9cca3532e1ea1356879334ce116f5526101d1-runc.f9kpcT.mount: Deactivated successfully.
Sep 12 17:34:10.079269 systemd[1]: Started cri-containerd-82cca8fc0738bb05d4c74cfc84a9cca3532e1ea1356879334ce116f5526101d1.scope - libcontainer container 82cca8fc0738bb05d4c74cfc84a9cca3532e1ea1356879334ce116f5526101d1.
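The recurring NetworkReady=false / "cni plugin not initialized" errors keep firing until the install-cni container started above drops a network config into the CNI conf directory (conventionally /etc/cni/net.d; that path is an assumption, not shown in this log). A hedged sketch of that readiness condition, not containerd's implementation:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // cniConfPresent reports whether any CNI network config exists in
    // confDir; the runtime stays NotReady while this is false.
    func cniConfPresent(confDir string) bool {
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            if matches, _ := filepath.Glob(filepath.Join(confDir, pattern)); len(matches) > 0 {
                return true
            }
        }
        return false
    }

    func main() {
        // Empty until install-cni finishes, hence the repeated
        // pod_workers.go errors above.
        fmt.Println(cniConfPresent("/etc/cni/net.d"))
    }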
Sep 12 17:34:10.103519 containerd[1498]: time="2025-09-12T17:34:10.103487517Z" level=info msg="StartContainer for \"82cca8fc0738bb05d4c74cfc84a9cca3532e1ea1356879334ce116f5526101d1\" returns successfully"
Sep 12 17:34:10.267286 kubelet[2549]: E0912 17:34:10.266206 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4tcw" podUID="262fff4b-d78b-430c-976d-43c3eb6a4adc"
Sep 12 17:34:10.611968 systemd[1]: cri-containerd-82cca8fc0738bb05d4c74cfc84a9cca3532e1ea1356879334ce116f5526101d1.scope: Deactivated successfully.
Sep 12 17:34:10.662845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82cca8fc0738bb05d4c74cfc84a9cca3532e1ea1356879334ce116f5526101d1-rootfs.mount: Deactivated successfully.
Sep 12 17:34:10.685912 containerd[1498]: time="2025-09-12T17:34:10.684989246Z" level=info msg="shim disconnected" id=82cca8fc0738bb05d4c74cfc84a9cca3532e1ea1356879334ce116f5526101d1 namespace=k8s.io
Sep 12 17:34:10.685912 containerd[1498]: time="2025-09-12T17:34:10.685065713Z" level=warning msg="cleaning up after shim disconnected" id=82cca8fc0738bb05d4c74cfc84a9cca3532e1ea1356879334ce116f5526101d1 namespace=k8s.io
Sep 12 17:34:10.685912 containerd[1498]: time="2025-09-12T17:34:10.685079239Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:34:10.702805 kubelet[2549]: I0912 17:34:10.702769 2549 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 12 17:34:10.708804 containerd[1498]: time="2025-09-12T17:34:10.708760637Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:34:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 17:34:10.749309 kubelet[2549]: W0912 17:34:10.749271 2549 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081-3-6-c-e429241c3f" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-6-c-e429241c3f' and this object
Sep 12 17:34:10.749447 kubelet[2549]: E0912 17:34:10.749331 2549 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081-3-6-c-e429241c3f\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-3-6-c-e429241c3f' and this object" logger="UnhandledError"
Sep 12 17:34:10.752265 kubelet[2549]: W0912 17:34:10.750392 2549 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-6-c-e429241c3f" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-6-c-e429241c3f' and this object
Sep 12 17:34:10.752265 kubelet[2549]: E0912 17:34:10.750425 2549 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-6-c-e429241c3f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-3-6-c-e429241c3f' and this object" logger="UnhandledError"
Sep 12 17:34:10.752265 kubelet[2549]: W0912 17:34:10.750865 2549 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-6-c-e429241c3f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-c-e429241c3f' and this object
Sep 12 17:34:10.752265 kubelet[2549]: E0912 17:34:10.750886 2549 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-6-c-e429241c3f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-c-e429241c3f' and this object" logger="UnhandledError"
Sep 12 17:34:10.755364 systemd[1]: Created slice kubepods-burstable-pod2038863b_3df2_4d79_ad86_96d22113a91c.slice - libcontainer container kubepods-burstable-pod2038863b_3df2_4d79_ad86_96d22113a91c.slice.
Sep 12 17:34:10.768074 systemd[1]: Created slice kubepods-besteffort-podda5d1853_6f02_4cda_8e7a_51c5e86f7848.slice - libcontainer container kubepods-besteffort-podda5d1853_6f02_4cda_8e7a_51c5e86f7848.slice.
Sep 12 17:34:10.771677 kubelet[2549]: W0912 17:34:10.771661 2549 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081-3-6-c-e429241c3f" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-c-e429241c3f' and this object
Sep 12 17:34:10.772225 kubelet[2549]: E0912 17:34:10.772209 2549 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081-3-6-c-e429241c3f\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-c-e429241c3f' and this object" logger="UnhandledError"
Sep 12 17:34:10.772430 kubelet[2549]: W0912 17:34:10.771945 2549 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4081-3-6-c-e429241c3f" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-c-e429241c3f' and this object
Sep 12 17:34:10.772585 kubelet[2549]: E0912 17:34:10.772570 2549 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-6-c-e429241c3f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-c-e429241c3f' and this object" logger="UnhandledError"
Sep 12 17:34:10.776192 systemd[1]: Created slice kubepods-besteffort-pod6b2d4555_77d4_4589_97dc_8cc68e252176.slice - libcontainer container kubepods-besteffort-pod6b2d4555_77d4_4589_97dc_8cc68e252176.slice.
Sep 12 17:34:10.778205 kubelet[2549]: I0912 17:34:10.778141 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5f203f5-5176-4742-9fe4-7f35ee43dc3d-config-volume\") pod \"coredns-7c65d6cfc9-hhxgk\" (UID: \"b5f203f5-5176-4742-9fe4-7f35ee43dc3d\") " pod="kube-system/coredns-7c65d6cfc9-hhxgk"
Sep 12 17:34:10.778879 kubelet[2549]: I0912 17:34:10.778862 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmxm\" (UniqueName: \"kubernetes.io/projected/b5f203f5-5176-4742-9fe4-7f35ee43dc3d-kube-api-access-rqmxm\") pod \"coredns-7c65d6cfc9-hhxgk\" (UID: \"b5f203f5-5176-4742-9fe4-7f35ee43dc3d\") " pod="kube-system/coredns-7c65d6cfc9-hhxgk"
Sep 12 17:34:10.778952 kubelet[2549]: I0912 17:34:10.778942 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tpf8\" (UniqueName: \"kubernetes.io/projected/2038863b-3df2-4d79-ad86-96d22113a91c-kube-api-access-2tpf8\") pod \"coredns-7c65d6cfc9-n262w\" (UID: \"2038863b-3df2-4d79-ad86-96d22113a91c\") " pod="kube-system/coredns-7c65d6cfc9-n262w"
Sep 12 17:34:10.779008 kubelet[2549]: I0912 17:34:10.778999 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da5d1853-6f02-4cda-8e7a-51c5e86f7848-calico-apiserver-certs\") pod \"calico-apiserver-65c8b64669-gr9jn\" (UID: \"da5d1853-6f02-4cda-8e7a-51c5e86f7848\") " pod="calico-apiserver/calico-apiserver-65c8b64669-gr9jn"
Sep 12 17:34:10.779091 kubelet[2549]: I0912 17:34:10.779081 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b2d4555-77d4-4589-97dc-8cc68e252176-goldmane-ca-bundle\") pod \"goldmane-7988f88666-gbp7h\" (UID: \"6b2d4555-77d4-4589-97dc-8cc68e252176\") " pod="calico-system/goldmane-7988f88666-gbp7h"
Sep 12 17:34:10.779164 kubelet[2549]: I0912 17:34:10.779136 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h658x\" (UniqueName: \"kubernetes.io/projected/6b2d4555-77d4-4589-97dc-8cc68e252176-kube-api-access-h658x\") pod \"goldmane-7988f88666-gbp7h\" (UID: \"6b2d4555-77d4-4589-97dc-8cc68e252176\") " pod="calico-system/goldmane-7988f88666-gbp7h"
Sep 12 17:34:10.779235 kubelet[2549]: I0912 17:34:10.779225 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76282f87-2b2c-4d09-a68d-b7b95e99f824-whisker-ca-bundle\") pod \"whisker-5c8965db89-kpnzw\" (UID: \"76282f87-2b2c-4d09-a68d-b7b95e99f824\") " pod="calico-system/whisker-5c8965db89-kpnzw"
Sep 12 17:34:10.779292 kubelet[2549]: I0912 17:34:10.779282 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2038863b-3df2-4d79-ad86-96d22113a91c-config-volume\") pod \"coredns-7c65d6cfc9-n262w\" (UID: \"2038863b-3df2-4d79-ad86-96d22113a91c\") " pod="kube-system/coredns-7c65d6cfc9-n262w"
Sep 12 17:34:10.779922 kubelet[2549]: I0912 17:34:10.779910 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxt25\" (UniqueName: \"kubernetes.io/projected/da5d1853-6f02-4cda-8e7a-51c5e86f7848-kube-api-access-kxt25\") pod \"calico-apiserver-65c8b64669-gr9jn\" (UID: \"da5d1853-6f02-4cda-8e7a-51c5e86f7848\") " pod="calico-apiserver/calico-apiserver-65c8b64669-gr9jn"
Sep 12 17:34:10.781210 kubelet[2549]: I0912 17:34:10.779995 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b2d4555-77d4-4589-97dc-8cc68e252176-config\") pod \"goldmane-7988f88666-gbp7h\" (UID: \"6b2d4555-77d4-4589-97dc-8cc68e252176\") " pod="calico-system/goldmane-7988f88666-gbp7h"
Sep 12 17:34:10.781210 kubelet[2549]: I0912 17:34:10.780014 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76282f87-2b2c-4d09-a68d-b7b95e99f824-whisker-backend-key-pair\") pod \"whisker-5c8965db89-kpnzw\" (UID: \"76282f87-2b2c-4d09-a68d-b7b95e99f824\") " pod="calico-system/whisker-5c8965db89-kpnzw"
Sep 12 17:34:10.781210 kubelet[2549]: I0912 17:34:10.780028 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6b2d4555-77d4-4589-97dc-8cc68e252176-goldmane-key-pair\") pod \"goldmane-7988f88666-gbp7h\" (UID: \"6b2d4555-77d4-4589-97dc-8cc68e252176\") " pod="calico-system/goldmane-7988f88666-gbp7h"
Sep 12 17:34:10.781210 kubelet[2549]: I0912 17:34:10.780042 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnd6\" (UniqueName: \"kubernetes.io/projected/76282f87-2b2c-4d09-a68d-b7b95e99f824-kube-api-access-fnnd6\") pod \"whisker-5c8965db89-kpnzw\" (UID: \"76282f87-2b2c-4d09-a68d-b7b95e99f824\") " pod="calico-system/whisker-5c8965db89-kpnzw"
Sep 12 17:34:10.783749 systemd[1]: Created slice kubepods-burstable-podb5f203f5_5176_4742_9fe4_7f35ee43dc3d.slice - libcontainer container kubepods-burstable-podb5f203f5_5176_4742_9fe4_7f35ee43dc3d.slice.
Sep 12 17:34:10.790238 systemd[1]: Created slice kubepods-besteffort-pod76282f87_2b2c_4d09_a68d_b7b95e99f824.slice - libcontainer container kubepods-besteffort-pod76282f87_2b2c_4d09_a68d_b7b95e99f824.slice.
Sep 12 17:34:10.796072 systemd[1]: Created slice kubepods-besteffort-pod37ecfd1d_6ac6_4ad3_9078_efcb50051c47.slice - libcontainer container kubepods-besteffort-pod37ecfd1d_6ac6_4ad3_9078_efcb50051c47.slice.
Sep 12 17:34:10.807015 systemd[1]: Created slice kubepods-besteffort-pod71dc886e_53d6_4485_9dc3_23a94916f815.slice - libcontainer container kubepods-besteffort-pod71dc886e_53d6_4485_9dc3_23a94916f815.slice.
Sep 12 17:34:10.812033 systemd[1]: Created slice kubepods-besteffort-podbbff376f_9ddd_4344_b41b_dd9ac22821d6.slice - libcontainer container kubepods-besteffort-podbbff376f_9ddd_4344_b41b_dd9ac22821d6.slice.
Sep 12 17:34:10.883689 kubelet[2549]: I0912 17:34:10.880839 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjt4m\" (UniqueName: \"kubernetes.io/projected/71dc886e-53d6-4485-9dc3-23a94916f815-kube-api-access-pjt4m\") pod \"calico-kube-controllers-675d4f9887-6qpj9\" (UID: \"71dc886e-53d6-4485-9dc3-23a94916f815\") " pod="calico-system/calico-kube-controllers-675d4f9887-6qpj9"
Sep 12 17:34:10.883689 kubelet[2549]: I0912 17:34:10.880916 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71dc886e-53d6-4485-9dc3-23a94916f815-tigera-ca-bundle\") pod \"calico-kube-controllers-675d4f9887-6qpj9\" (UID: \"71dc886e-53d6-4485-9dc3-23a94916f815\") " pod="calico-system/calico-kube-controllers-675d4f9887-6qpj9"
Sep 12 17:34:10.883689 kubelet[2549]: I0912 17:34:10.880939 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bbff376f-9ddd-4344-b41b-dd9ac22821d6-calico-apiserver-certs\") pod \"calico-apiserver-65c8b64669-dswbl\" (UID: \"bbff376f-9ddd-4344-b41b-dd9ac22821d6\") " pod="calico-apiserver/calico-apiserver-65c8b64669-dswbl"
Sep 12 17:34:10.883689 kubelet[2549]: I0912 17:34:10.881061 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/37ecfd1d-6ac6-4ad3-9078-efcb50051c47-calico-apiserver-certs\") pod \"calico-apiserver-5dcc9d7764-p9sbl\" (UID: \"37ecfd1d-6ac6-4ad3-9078-efcb50051c47\") " pod="calico-apiserver/calico-apiserver-5dcc9d7764-p9sbl"
Sep 12 17:34:10.883689 kubelet[2549]: I0912 17:34:10.881081 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhwlx\" (UniqueName: \"kubernetes.io/projected/bbff376f-9ddd-4344-b41b-dd9ac22821d6-kube-api-access-rhwlx\") pod \"calico-apiserver-65c8b64669-dswbl\" (UID: \"bbff376f-9ddd-4344-b41b-dd9ac22821d6\") " pod="calico-apiserver/calico-apiserver-65c8b64669-dswbl"
Sep 12 17:34:10.884829 kubelet[2549]: I0912 17:34:10.881114 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr42g\" (UniqueName: \"kubernetes.io/projected/37ecfd1d-6ac6-4ad3-9078-efcb50051c47-kube-api-access-pr42g\") pod \"calico-apiserver-5dcc9d7764-p9sbl\" (UID: \"37ecfd1d-6ac6-4ad3-9078-efcb50051c47\") " pod="calico-apiserver/calico-apiserver-5dcc9d7764-p9sbl"
Sep 12 17:34:11.083193 containerd[1498]: time="2025-09-12T17:34:11.083092556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gbp7h,Uid:6b2d4555-77d4-4589-97dc-8cc68e252176,Namespace:calico-system,Attempt:0,}"
Sep 12 17:34:11.113751 containerd[1498]: time="2025-09-12T17:34:11.113689140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675d4f9887-6qpj9,Uid:71dc886e-53d6-4485-9dc3-23a94916f815,Namespace:calico-system,Attempt:0,}"
Sep 12 17:34:11.326497 containerd[1498]: time="2025-09-12T17:34:11.326426893Z" level=error msg="Failed to destroy network for sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:11.327687 containerd[1498]: time="2025-09-12T17:34:11.327632367Z" level=error msg="Failed to destroy network for sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:11.338560 containerd[1498]: time="2025-09-12T17:34:11.338397502Z" level=error msg="encountered an error cleaning up failed sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:11.338560 containerd[1498]: time="2025-09-12T17:34:11.338463489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gbp7h,Uid:6b2d4555-77d4-4589-97dc-8cc68e252176,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:11.339983 containerd[1498]: time="2025-09-12T17:34:11.339626281Z" level=error msg="encountered an error cleaning up failed sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:11.339983 containerd[1498]: time="2025-09-12T17:34:11.339715863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675d4f9887-6qpj9,Uid:71dc886e-53d6-4485-9dc3-23a94916f815,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:11.348248 kubelet[2549]: E0912 17:34:11.347583 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:11.348248 kubelet[2549]: E0912 17:34:11.347702 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-675d4f9887-6qpj9"
Sep 12 17:34:11.348248 kubelet[2549]: E0912 17:34:11.347592 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:11.348248 kubelet[2549]: E0912 17:34:11.347736 2549 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-675d4f9887-6qpj9"
Sep 12 17:34:11.349821 kubelet[2549]: E0912 17:34:11.347796 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gbp7h"
Sep 12 17:34:11.349821 kubelet[2549]: E0912 17:34:11.347821 2549 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gbp7h"
Sep 12 17:34:11.349821 kubelet[2549]: E0912 17:34:11.347854 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-675d4f9887-6qpj9_calico-system(71dc886e-53d6-4485-9dc3-23a94916f815)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-675d4f9887-6qpj9_calico-system(71dc886e-53d6-4485-9dc3-23a94916f815)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-675d4f9887-6qpj9" podUID="71dc886e-53d6-4485-9dc3-23a94916f815"
Sep 12 17:34:11.350045 kubelet[2549]: E0912 17:34:11.347869 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-gbp7h_calico-system(6b2d4555-77d4-4589-97dc-8cc68e252176)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-gbp7h_calico-system(6b2d4555-77d4-4589-97dc-8cc68e252176)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gbp7h" podUID="6b2d4555-77d4-4589-97dc-8cc68e252176"
Sep 12 17:34:11.386351 containerd[1498]: time="2025-09-12T17:34:11.386076109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 17:34:11.387594 kubelet[2549]: I0912 17:34:11.387554 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc"
Sep 12 17:34:11.392612 kubelet[2549]: I0912 17:34:11.392570 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397"
Sep 12 17:34:11.401963 containerd[1498]: time="2025-09-12T17:34:11.401911888Z" level=info msg="StopPodSandbox for \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\""
Sep 12 17:34:11.403290 containerd[1498]: time="2025-09-12T17:34:11.402300844Z" level=info msg="StopPodSandbox for \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\""
Sep 12 17:34:11.404888 containerd[1498]: time="2025-09-12T17:34:11.404105299Z" level=info msg="Ensure that sandbox c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397 in task-service has been cleanup successfully"
Sep 12 17:34:11.404888 containerd[1498]: time="2025-09-12T17:34:11.404366841Z" level=info msg="Ensure that sandbox 6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc in task-service has been cleanup successfully"
Sep 12 17:34:11.448618 containerd[1498]: time="2025-09-12T17:34:11.448546049Z" level=error msg="StopPodSandbox for \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\" failed" error="failed to destroy network for sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:11.448969 kubelet[2549]: E0912 17:34:11.448888 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397"
Sep 12 17:34:11.449051 kubelet[2549]: E0912 17:34:11.448993 2549 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397"}
Sep 12 17:34:11.449096 kubelet[2549]: E0912 17:34:11.449064 2549 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b2d4555-77d4-4589-97dc-8cc68e252176\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:34:11.449298 kubelet[2549]: E0912 17:34:11.449092 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b2d4555-77d4-4589-97dc-8cc68e252176\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gbp7h" podUID="6b2d4555-77d4-4589-97dc-8cc68e252176"
Sep 12 17:34:11.452962 containerd[1498]: time="2025-09-12T17:34:11.452927560Z" level=error msg="StopPodSandbox for \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\" failed" error="failed to destroy network for sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:11.453270 kubelet[2549]: E0912 17:34:11.453219 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc"
Sep 12 17:34:11.453270 kubelet[2549]: E0912 17:34:11.453267 2549 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc"}
Sep 12 17:34:11.453375 kubelet[2549]: E0912 17:34:11.453299 2549 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71dc886e-53d6-4485-9dc3-23a94916f815\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:34:11.453375 kubelet[2549]: E0912 17:34:11.453322 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71dc886e-53d6-4485-9dc3-23a94916f815\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-675d4f9887-6qpj9" podUID="71dc886e-53d6-4485-9dc3-23a94916f815"
Sep 12 17:34:11.882824 kubelet[2549]: E0912 17:34:11.882556 2549 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.882824 kubelet[2549]: E0912 17:34:11.882684 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b5f203f5-5176-4742-9fe4-7f35ee43dc3d-config-volume podName:b5f203f5-5176-4742-9fe4-7f35ee43dc3d nodeName:}" failed. No retries permitted until 2025-09-12 17:34:12.38263544 +0000 UTC m=+29.213474351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b5f203f5-5176-4742-9fe4-7f35ee43dc3d-config-volume") pod "coredns-7c65d6cfc9-hhxgk" (UID: "b5f203f5-5176-4742-9fe4-7f35ee43dc3d") : failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.884586 kubelet[2549]: E0912 17:34:11.883668 2549 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition
Sep 12 17:34:11.884586 kubelet[2549]: E0912 17:34:11.883823 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76282f87-2b2c-4d09-a68d-b7b95e99f824-whisker-backend-key-pair podName:76282f87-2b2c-4d09-a68d-b7b95e99f824 nodeName:}" failed. No retries permitted until 2025-09-12 17:34:12.383796539 +0000 UTC m=+29.214635449 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/76282f87-2b2c-4d09-a68d-b7b95e99f824-whisker-backend-key-pair") pod "whisker-5c8965db89-kpnzw" (UID: "76282f87-2b2c-4d09-a68d-b7b95e99f824") : failed to sync secret cache: timed out waiting for the condition
Sep 12 17:34:11.885256 kubelet[2549]: E0912 17:34:11.885198 2549 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.885406 kubelet[2549]: E0912 17:34:11.885298 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2038863b-3df2-4d79-ad86-96d22113a91c-config-volume podName:2038863b-3df2-4d79-ad86-96d22113a91c nodeName:}" failed. No retries permitted until 2025-09-12 17:34:12.385268995 +0000 UTC m=+29.216107916 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2038863b-3df2-4d79-ad86-96d22113a91c-config-volume") pod "coredns-7c65d6cfc9-n262w" (UID: "2038863b-3df2-4d79-ad86-96d22113a91c") : failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.901891 kubelet[2549]: E0912 17:34:11.901743 2549 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.901891 kubelet[2549]: E0912 17:34:11.901828 2549 projected.go:194] Error preparing data for projected volume kube-api-access-kxt25 for pod calico-apiserver/calico-apiserver-65c8b64669-gr9jn: failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.901891 kubelet[2549]: E0912 17:34:11.901898 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/da5d1853-6f02-4cda-8e7a-51c5e86f7848-kube-api-access-kxt25 podName:da5d1853-6f02-4cda-8e7a-51c5e86f7848 nodeName:}" failed. No retries permitted until 2025-09-12 17:34:12.401878009 +0000 UTC m=+29.232716920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kxt25" (UniqueName: "kubernetes.io/projected/da5d1853-6f02-4cda-8e7a-51c5e86f7848-kube-api-access-kxt25") pod "calico-apiserver-65c8b64669-gr9jn" (UID: "da5d1853-6f02-4cda-8e7a-51c5e86f7848") : failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.991864 kubelet[2549]: E0912 17:34:11.990896 2549 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.991864 kubelet[2549]: E0912 17:34:11.990963 2549 projected.go:194] Error preparing data for projected volume kube-api-access-pr42g for pod calico-apiserver/calico-apiserver-5dcc9d7764-p9sbl: failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.991864 kubelet[2549]: E0912 17:34:11.991070 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37ecfd1d-6ac6-4ad3-9078-efcb50051c47-kube-api-access-pr42g podName:37ecfd1d-6ac6-4ad3-9078-efcb50051c47 nodeName:}" failed. No retries permitted until 2025-09-12 17:34:12.491031733 +0000 UTC m=+29.321870654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pr42g" (UniqueName: "kubernetes.io/projected/37ecfd1d-6ac6-4ad3-9078-efcb50051c47-kube-api-access-pr42g") pod "calico-apiserver-5dcc9d7764-p9sbl" (UID: "37ecfd1d-6ac6-4ad3-9078-efcb50051c47") : failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.995360 kubelet[2549]: E0912 17:34:11.995277 2549 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.995360 kubelet[2549]: E0912 17:34:11.995329 2549 projected.go:194] Error preparing data for projected volume kube-api-access-rhwlx for pod calico-apiserver/calico-apiserver-65c8b64669-dswbl: failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:11.995575 kubelet[2549]: E0912 17:34:11.995404 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bbff376f-9ddd-4344-b41b-dd9ac22821d6-kube-api-access-rhwlx podName:bbff376f-9ddd-4344-b41b-dd9ac22821d6 nodeName:}" failed. No retries permitted until 2025-09-12 17:34:12.495383487 +0000 UTC m=+29.326222408 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rhwlx" (UniqueName: "kubernetes.io/projected/bbff376f-9ddd-4344-b41b-dd9ac22821d6-kube-api-access-rhwlx") pod "calico-apiserver-65c8b64669-dswbl" (UID: "bbff376f-9ddd-4344-b41b-dd9ac22821d6") : failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:34:12.007277 kubelet[2549]: I0912 17:34:12.006834 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:34:12.038005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc-shm.mount: Deactivated successfully.
Sep 12 17:34:12.038167 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397-shm.mount: Deactivated successfully.
Sep 12 17:34:12.273685 systemd[1]: Created slice kubepods-besteffort-pod262fff4b_d78b_430c_976d_43c3eb6a4adc.slice - libcontainer container kubepods-besteffort-pod262fff4b_d78b_430c_976d_43c3eb6a4adc.slice.
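Every failed sandbox in this log traces to one missing file: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, which the calico/node container (whose image pull began above) writes once it runs. Until then each CNI add and delete fails with the stat error quoted in the messages. A sketch of that lookup, assuming only the path named in the error text, not Calico's actual code:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // calicoNodename reads the node name file that calico/node publishes;
    // before calico/node starts, the read fails just like the log's
    // "stat /var/lib/calico/nodename: no such file or directory".
    func calicoNodename() (string, error) {
        data, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            return "", fmt.Errorf("read /var/lib/calico/nodename: %w; check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := calicoNodename()
        fmt.Println(name, err)
    }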
Sep 12 17:34:12.277415 containerd[1498]: time="2025-09-12T17:34:12.277327801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4tcw,Uid:262fff4b-d78b-430c-976d-43c3eb6a4adc,Namespace:calico-system,Attempt:0,}"
Sep 12 17:34:12.371266 containerd[1498]: time="2025-09-12T17:34:12.371162275Z" level=error msg="Failed to destroy network for sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.373370 containerd[1498]: time="2025-09-12T17:34:12.371609222Z" level=error msg="encountered an error cleaning up failed sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.373370 containerd[1498]: time="2025-09-12T17:34:12.371674157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4tcw,Uid:262fff4b-d78b-430c-976d-43c3eb6a4adc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.373541 kubelet[2549]: E0912 17:34:12.373387 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.373541 kubelet[2549]: E0912 17:34:12.373446 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j4tcw"
Sep 12 17:34:12.373541 kubelet[2549]: E0912 17:34:12.373466 2549 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j4tcw"
Sep 12 17:34:12.374333 kubelet[2549]: E0912 17:34:12.373508 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j4tcw_calico-system(262fff4b-d78b-430c-976d-43c3eb6a4adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j4tcw_calico-system(262fff4b-d78b-430c-976d-43c3eb6a4adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j4tcw" podUID="262fff4b-d78b-430c-976d-43c3eb6a4adc"
Sep 12 17:34:12.375610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80-shm.mount: Deactivated successfully.
Sep 12 17:34:12.403372 kubelet[2549]: I0912 17:34:12.403308 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80"
Sep 12 17:34:12.406896 containerd[1498]: time="2025-09-12T17:34:12.406754701Z" level=info msg="StopPodSandbox for \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\""
Sep 12 17:34:12.407117 containerd[1498]: time="2025-09-12T17:34:12.407048203Z" level=info msg="Ensure that sandbox c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80 in task-service has been cleanup successfully"
Sep 12 17:34:12.447739 containerd[1498]: time="2025-09-12T17:34:12.447681889Z" level=error msg="StopPodSandbox for \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\" failed" error="failed to destroy network for sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.448340 kubelet[2549]: E0912 17:34:12.448137 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80"
Sep 12 17:34:12.448340 kubelet[2549]: E0912 17:34:12.448224 2549 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80"}
Sep 12 17:34:12.448340 kubelet[2549]: E0912 17:34:12.448276 2549 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"262fff4b-d78b-430c-976d-43c3eb6a4adc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:34:12.448340 kubelet[2549]: E0912 17:34:12.448303 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"262fff4b-d78b-430c-976d-43c3eb6a4adc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j4tcw" podUID="262fff4b-d78b-430c-976d-43c3eb6a4adc"
Sep 12 17:34:12.564416 containerd[1498]: time="2025-09-12T17:34:12.564255058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n262w,Uid:2038863b-3df2-4d79-ad86-96d22113a91c,Namespace:kube-system,Attempt:0,}"
Sep 12 17:34:12.572462 containerd[1498]: time="2025-09-12T17:34:12.572396838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c8b64669-gr9jn,Uid:da5d1853-6f02-4cda-8e7a-51c5e86f7848,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:34:12.590947 containerd[1498]: time="2025-09-12T17:34:12.590905959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hhxgk,Uid:b5f203f5-5176-4742-9fe4-7f35ee43dc3d,Namespace:kube-system,Attempt:0,}"
Sep 12 17:34:12.596085 containerd[1498]: time="2025-09-12T17:34:12.595316030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c8965db89-kpnzw,Uid:76282f87-2b2c-4d09-a68d-b7b95e99f824,Namespace:calico-system,Attempt:0,}"
Sep 12 17:34:12.605529 containerd[1498]: time="2025-09-12T17:34:12.605228987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcc9d7764-p9sbl,Uid:37ecfd1d-6ac6-4ad3-9078-efcb50051c47,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:34:12.616732 containerd[1498]: time="2025-09-12T17:34:12.616666792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c8b64669-dswbl,Uid:bbff376f-9ddd-4344-b41b-dd9ac22821d6,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:34:12.676789 containerd[1498]: time="2025-09-12T17:34:12.676724564Z" level=error msg="Failed to destroy network for sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.677163 containerd[1498]: time="2025-09-12T17:34:12.677079916Z" level=error msg="encountered an error cleaning up failed sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.678184 containerd[1498]: time="2025-09-12T17:34:12.677139761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n262w,Uid:2038863b-3df2-4d79-ad86-96d22113a91c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.680298 kubelet[2549]: E0912 17:34:12.679447 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.680298 kubelet[2549]: E0912 17:34:12.679499 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-n262w"
Sep 12 17:34:12.680298 kubelet[2549]: E0912 17:34:12.679521 2549 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-n262w"
Sep 12 17:34:12.680412 kubelet[2549]: E0912 17:34:12.679558 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-n262w_kube-system(2038863b-3df2-4d79-ad86-96d22113a91c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-n262w_kube-system(2038863b-3df2-4d79-ad86-96d22113a91c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-n262w" podUID="2038863b-3df2-4d79-ad86-96d22113a91c"
Sep 12 17:34:12.733700 containerd[1498]: time="2025-09-12T17:34:12.733646681Z" level=error msg="Failed to destroy network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.734717 containerd[1498]: time="2025-09-12T17:34:12.733961344Z" level=error msg="encountered an error cleaning up failed sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.734717 containerd[1498]: time="2025-09-12T17:34:12.734012943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c8b64669-gr9jn,Uid:da5d1853-6f02-4cda-8e7a-51c5e86f7848,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.734911 kubelet[2549]: E0912 17:34:12.734229 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.734911 kubelet[2549]: E0912 17:34:12.734284 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65c8b64669-gr9jn"
Sep 12 17:34:12.734911 kubelet[2549]: E0912 17:34:12.734300 2549 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65c8b64669-gr9jn"
Sep 12 17:34:12.734984 kubelet[2549]: E0912 17:34:12.734355 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65c8b64669-gr9jn_calico-apiserver(da5d1853-6f02-4cda-8e7a-51c5e86f7848)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65c8b64669-gr9jn_calico-apiserver(da5d1853-6f02-4cda-8e7a-51c5e86f7848)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65c8b64669-gr9jn" podUID="da5d1853-6f02-4cda-8e7a-51c5e86f7848"
Sep 12 17:34:12.754602 containerd[1498]: time="2025-09-12T17:34:12.754558772Z" level=error msg="Failed to destroy network for sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.754995 containerd[1498]: time="2025-09-12T17:34:12.754968347Z" level=error msg="encountered an error cleaning up failed sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.755137 containerd[1498]: time="2025-09-12T17:34:12.755118134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c8b64669-dswbl,Uid:bbff376f-9ddd-4344-b41b-dd9ac22821d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.755484 kubelet[2549]: E0912 17:34:12.755448 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.755575 kubelet[2549]: E0912 17:34:12.755503 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65c8b64669-dswbl"
Sep 12 17:34:12.755575 kubelet[2549]: E0912 17:34:12.755521 2549 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65c8b64669-dswbl"
Sep 12 17:34:12.755732 kubelet[2549]: E0912 17:34:12.755563 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65c8b64669-dswbl_calico-apiserver(bbff376f-9ddd-4344-b41b-dd9ac22821d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65c8b64669-dswbl_calico-apiserver(bbff376f-9ddd-4344-b41b-dd9ac22821d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65c8b64669-dswbl" podUID="bbff376f-9ddd-4344-b41b-dd9ac22821d6"
Sep 12 17:34:12.772178 containerd[1498]: time="2025-09-12T17:34:12.772109953Z" level=error msg="Failed to destroy network for sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.773432 containerd[1498]: time="2025-09-12T17:34:12.772590585Z" level=error msg="encountered an error cleaning up failed sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.773432 containerd[1498]: time="2025-09-12T17:34:12.772645571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hhxgk,Uid:b5f203f5-5176-4742-9fe4-7f35ee43dc3d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.773690 kubelet[2549]: E0912 17:34:12.772822 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.773690 kubelet[2549]: E0912 17:34:12.772862 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hhxgk"
Sep 12 17:34:12.773690 kubelet[2549]: E0912 17:34:12.772878 2549 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hhxgk"
Sep 12 17:34:12.773904 kubelet[2549]: E0912 17:34:12.772926 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hhxgk_kube-system(b5f203f5-5176-4742-9fe4-7f35ee43dc3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hhxgk_kube-system(b5f203f5-5176-4742-9fe4-7f35ee43dc3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hhxgk" podUID="b5f203f5-5176-4742-9fe4-7f35ee43dc3d"
Sep 12 17:34:12.778101 containerd[1498]: time="2025-09-12T17:34:12.778034741Z" level=error msg="Failed to destroy network for sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.778432 containerd[1498]: time="2025-09-12T17:34:12.778387748Z" level=error msg="encountered an error cleaning up failed sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.778521 containerd[1498]: time="2025-09-12T17:34:12.778463202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c8965db89-kpnzw,Uid:76282f87-2b2c-4d09-a68d-b7b95e99f824,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.778842 kubelet[2549]: E0912 17:34:12.778734 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.778842 kubelet[2549]: E0912 17:34:12.778768 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c8965db89-kpnzw"
Sep 12 17:34:12.778842 kubelet[2549]: E0912 17:34:12.778819 2549 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c8965db89-kpnzw"
Sep 12 17:34:12.779031 kubelet[2549]: E0912 17:34:12.778862 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c8965db89-kpnzw_calico-system(76282f87-2b2c-4d09-a68d-b7b95e99f824)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c8965db89-kpnzw_calico-system(76282f87-2b2c-4d09-a68d-b7b95e99f824)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c8965db89-kpnzw" podUID="76282f87-2b2c-4d09-a68d-b7b95e99f824"
Sep 12 17:34:12.792353 containerd[1498]: time="2025-09-12T17:34:12.792294608Z" level=error msg="Failed to destroy network for sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.792673 containerd[1498]: time="2025-09-12T17:34:12.792632246Z" level=error msg="encountered an error cleaning up failed sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.792732 containerd[1498]: time="2025-09-12T17:34:12.792691760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcc9d7764-p9sbl,Uid:37ecfd1d-6ac6-4ad3-9078-efcb50051c47,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.792959 kubelet[2549]: E0912 17:34:12.792912 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:12.793083 kubelet[2549]: E0912 17:34:12.792955 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dcc9d7764-p9sbl"
Sep 12 17:34:12.793083 kubelet[2549]: E0912 17:34:12.792992 2549 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dcc9d7764-p9sbl"
Sep 12 17:34:12.793083 kubelet[2549]: E0912 17:34:12.793036 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dcc9d7764-p9sbl_calico-apiserver(37ecfd1d-6ac6-4ad3-9078-efcb50051c47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dcc9d7764-p9sbl_calico-apiserver(37ecfd1d-6ac6-4ad3-9078-efcb50051c47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dcc9d7764-p9sbl" podUID="37ecfd1d-6ac6-4ad3-9078-efcb50051c47"
Sep 12 17:34:13.405953 kubelet[2549]: I0912 17:34:13.405907 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9"
Sep 12 17:34:13.407171 containerd[1498]: time="2025-09-12T17:34:13.406797371Z" level=info msg="StopPodSandbox for \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\""
Sep 12 17:34:13.407171 containerd[1498]: time="2025-09-12T17:34:13.407022743Z" level=info msg="Ensure that sandbox aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9 in task-service has been cleanup successfully"
Sep 12 17:34:13.408510 kubelet[2549]: I0912 17:34:13.408446 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6"
Sep 12 17:34:13.409973 kubelet[2549]: I0912 17:34:13.409934 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe"
Sep 12 17:34:13.410168 containerd[1498]: time="2025-09-12T17:34:13.410084082Z" level=info msg="StopPodSandbox for \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\""
Sep 12 17:34:13.410407 containerd[1498]: time="2025-09-12T17:34:13.410367956Z" level=info msg="Ensure that sandbox f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6 in task-service has been cleanup successfully"
Sep 12 17:34:13.411453 containerd[1498]: time="2025-09-12T17:34:13.411399003Z" level=info msg="StopPodSandbox for \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\""
Sep 12 17:34:13.411757 containerd[1498]: time="2025-09-12T17:34:13.411525908Z" level=info msg="Ensure that sandbox de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe in task-service has been cleanup successfully"
Sep 12 17:34:13.414485 kubelet[2549]: I0912 17:34:13.414431 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20"
Sep 12 17:34:13.415533 containerd[1498]: time="2025-09-12T17:34:13.415082386Z" level=info msg="StopPodSandbox for \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\""
Sep 12 17:34:13.417334 containerd[1498]: time="2025-09-12T17:34:13.416944548Z" level=info msg="Ensure that sandbox cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20 in task-service has been cleanup successfully"
Sep 12 17:34:13.422457 kubelet[2549]: I0912 17:34:13.422434 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de"
Sep 12 17:34:13.424583 containerd[1498]: time="2025-09-12T17:34:13.424068226Z" level=info msg="StopPodSandbox for \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\""
Sep 12 17:34:13.424785 containerd[1498]: time="2025-09-12T17:34:13.424704166Z" level=info msg="Ensure that sandbox 33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de in task-service has been cleanup successfully"
Sep 12 17:34:13.428938 kubelet[2549]: I0912 17:34:13.428898 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261"
Sep 12 17:34:13.429851 containerd[1498]: time="2025-09-12T17:34:13.429721397Z" level=info msg="StopPodSandbox for \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\""
Sep 12 17:34:13.430090 containerd[1498]: time="2025-09-12T17:34:13.430011153Z" level=info msg="Ensure that sandbox fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261 in task-service has been cleanup successfully"
Sep 12 17:34:13.482115 containerd[1498]: time="2025-09-12T17:34:13.481986163Z" level=error msg="StopPodSandbox for \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\" failed" error="failed to destroy network for sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:13.483340 kubelet[2549]: E0912 17:34:13.483285 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20"
Sep 12 17:34:13.483397 kubelet[2549]: E0912 17:34:13.483353 2549 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20"}
Sep 12 17:34:13.483397 kubelet[2549]: E0912 17:34:13.483391 2549 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2038863b-3df2-4d79-ad86-96d22113a91c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:34:13.483486 kubelet[2549]: E0912 17:34:13.483411 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2038863b-3df2-4d79-ad86-96d22113a91c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-n262w" podUID="2038863b-3df2-4d79-ad86-96d22113a91c"
Sep 12 17:34:13.488560 containerd[1498]: time="2025-09-12T17:34:13.487672236Z" level=error msg="StopPodSandbox for \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\" failed" error="failed to destroy network for sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:13.488794 kubelet[2549]: E0912 17:34:13.488740 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de"
Sep 12 17:34:13.488919 kubelet[2549]: E0912 17:34:13.488903 2549 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de"}
Sep 12 17:34:13.489029 kubelet[2549]: E0912 17:34:13.489015 2549 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37ecfd1d-6ac6-4ad3-9078-efcb50051c47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:34:13.489200 kubelet[2549]: E0912 17:34:13.489135 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37ecfd1d-6ac6-4ad3-9078-efcb50051c47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dcc9d7764-p9sbl" podUID="37ecfd1d-6ac6-4ad3-9078-efcb50051c47"
Sep 12 17:34:13.495528 containerd[1498]: time="2025-09-12T17:34:13.495217544Z" level=error msg="StopPodSandbox for \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\" failed" error="failed to destroy network for sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:13.495973 kubelet[2549]: E0912 17:34:13.495821 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9"
Sep 12 17:34:13.495973 kubelet[2549]: E0912 17:34:13.495880 2549 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9"}
Sep 12 17:34:13.495973 kubelet[2549]: E0912 17:34:13.495909 2549 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76282f87-2b2c-4d09-a68d-b7b95e99f824\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:34:13.495973 kubelet[2549]: E0912 17:34:13.495927 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76282f87-2b2c-4d09-a68d-b7b95e99f824\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c8965db89-kpnzw" podUID="76282f87-2b2c-4d09-a68d-b7b95e99f824"
Sep 12 17:34:13.508311 containerd[1498]: time="2025-09-12T17:34:13.508111067Z" level=error msg="StopPodSandbox for \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\" failed" error="failed to destroy network for sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:13.509126 kubelet[2549]: E0912 17:34:13.508371 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261"
Sep 12 17:34:13.509126 kubelet[2549]: E0912 17:34:13.508411 2549 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261"}
Sep 12 17:34:13.509126 kubelet[2549]: E0912 17:34:13.508439 2549 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bbff376f-9ddd-4344-b41b-dd9ac22821d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:34:13.509126 kubelet[2549]: E0912 17:34:13.508459 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bbff376f-9ddd-4344-b41b-dd9ac22821d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65c8b64669-dswbl" podUID="bbff376f-9ddd-4344-b41b-dd9ac22821d6"
Sep 12 17:34:13.511008 containerd[1498]: time="2025-09-12T17:34:13.510917568Z" level=error msg="StopPodSandbox for \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\" failed" error="failed to destroy network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:13.511088 kubelet[2549]: E0912 17:34:13.511063 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe"
Sep 12 17:34:13.511125 kubelet[2549]: E0912 17:34:13.511095 2549 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe"}
Sep 12 17:34:13.511125 kubelet[2549]: E0912 17:34:13.511117 2549 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da5d1853-6f02-4cda-8e7a-51c5e86f7848\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:34:13.512001 kubelet[2549]: E0912 17:34:13.511136 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da5d1853-6f02-4cda-8e7a-51c5e86f7848\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65c8b64669-gr9jn" podUID="da5d1853-6f02-4cda-8e7a-51c5e86f7848"
Sep 12 17:34:13.512773 containerd[1498]: time="2025-09-12T17:34:13.512731877Z" level=error msg="StopPodSandbox for \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\" failed" error="failed to destroy network for sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:34:13.512907 kubelet[2549]: E0912 17:34:13.512879 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6"
Sep 12 17:34:13.512945 kubelet[2549]: E0912 17:34:13.512906 2549 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6"}
Sep 12 17:34:13.512945 kubelet[2549]: E0912 17:34:13.512927 2549 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f203f5-5176-4742-9fe4-7f35ee43dc3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:34:13.513002 kubelet[2549]: E0912 17:34:13.512946 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f203f5-5176-4742-9fe4-7f35ee43dc3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hhxgk" podUID="b5f203f5-5176-4742-9fe4-7f35ee43dc3d"
Sep 12 17:34:18.669897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285011666.mount: Deactivated successfully.
Sep 12 17:34:18.766415 containerd[1498]: time="2025-09-12T17:34:18.731938977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339"
Sep 12 17:34:18.768692 containerd[1498]: time="2025-09-12T17:34:18.768626134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:18.808519 containerd[1498]: time="2025-09-12T17:34:18.808451965Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:18.809342 containerd[1498]: time="2025-09-12T17:34:18.809268146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:18.811836 containerd[1498]: time="2025-09-12T17:34:18.811789530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.423594747s"
Sep 12 17:34:18.811836 containerd[1498]: time="2025-09-12T17:34:18.811833924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\""
Sep 12 17:34:18.859262 containerd[1498]: time="2025-09-12T17:34:18.859180905Z" level=info msg="CreateContainer within sandbox \"a8f6a73108e2decf9d937f29bbfcb855968cbba12d823346199930459b7978d0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 12 17:34:18.919463 containerd[1498]: time="2025-09-12T17:34:18.919391173Z" level=info msg="CreateContainer within sandbox \"a8f6a73108e2decf9d937f29bbfcb855968cbba12d823346199930459b7978d0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d6c55c74c42a5ec726bbf2fc03915f126d4786663521024fd2e7bd2d53d03d80\""
Sep 12 17:34:18.924456 containerd[1498]: time="2025-09-12T17:34:18.923971155Z" level=info msg="StartContainer for \"d6c55c74c42a5ec726bbf2fc03915f126d4786663521024fd2e7bd2d53d03d80\""
Sep 12 17:34:19.033299 systemd[1]: Started cri-containerd-d6c55c74c42a5ec726bbf2fc03915f126d4786663521024fd2e7bd2d53d03d80.scope - libcontainer container d6c55c74c42a5ec726bbf2fc03915f126d4786663521024fd2e7bd2d53d03d80.
Sep 12 17:34:19.083710 containerd[1498]: time="2025-09-12T17:34:19.083655029Z" level=info msg="StartContainer for \"d6c55c74c42a5ec726bbf2fc03915f126d4786663521024fd2e7bd2d53d03d80\" returns successfully"
Sep 12 17:34:19.183579 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 12 17:34:19.185333 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Sep 12 17:34:19.392575 containerd[1498]: time="2025-09-12T17:34:19.392514920Z" level=info msg="StopPodSandbox for \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\""
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.485 [INFO][3842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9"
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.487 [INFO][3842] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" iface="eth0" netns="/var/run/netns/cni-583d71d6-02dd-cf48-cff2-e04be0ee5752"
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.488 [INFO][3842] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" iface="eth0" netns="/var/run/netns/cni-583d71d6-02dd-cf48-cff2-e04be0ee5752"
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.488 [INFO][3842] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" iface="eth0" netns="/var/run/netns/cni-583d71d6-02dd-cf48-cff2-e04be0ee5752"
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.488 [INFO][3842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9"
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.488 [INFO][3842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9"
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.691 [INFO][3849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" HandleID="k8s-pod-network.aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0"
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.693 [INFO][3849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.693 [INFO][3849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.705 [WARNING][3849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" HandleID="k8s-pod-network.aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0"
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.705 [INFO][3849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" HandleID="k8s-pod-network.aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0"
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.707 [INFO][3849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:19.710529 containerd[1498]: 2025-09-12 17:34:19.708 [INFO][3842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9"
Sep 12 17:34:19.710929 containerd[1498]: time="2025-09-12T17:34:19.710617176Z" level=info msg="TearDown network for sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\" successfully"
Sep 12 17:34:19.710929 containerd[1498]: time="2025-09-12T17:34:19.710641002Z" level=info msg="StopPodSandbox for \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\" returns successfully"
Sep 12 17:34:19.714064 systemd[1]: run-netns-cni\x2d583d71d6\x2d02dd\x2dcf48\x2dcff2\x2de04be0ee5752.mount: Deactivated successfully.
Sep 12 17:34:19.768472 kubelet[2549]: I0912 17:34:19.768407 2549 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76282f87-2b2c-4d09-a68d-b7b95e99f824-whisker-ca-bundle\") pod \"76282f87-2b2c-4d09-a68d-b7b95e99f824\" (UID: \"76282f87-2b2c-4d09-a68d-b7b95e99f824\") "
Sep 12 17:34:19.768876 kubelet[2549]: I0912 17:34:19.768518 2549 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnnd6\" (UniqueName: \"kubernetes.io/projected/76282f87-2b2c-4d09-a68d-b7b95e99f824-kube-api-access-fnnd6\") pod \"76282f87-2b2c-4d09-a68d-b7b95e99f824\" (UID: \"76282f87-2b2c-4d09-a68d-b7b95e99f824\") "
Sep 12 17:34:19.768876 kubelet[2549]: I0912 17:34:19.768548 2549 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76282f87-2b2c-4d09-a68d-b7b95e99f824-whisker-backend-key-pair\") pod \"76282f87-2b2c-4d09-a68d-b7b95e99f824\" (UID: \"76282f87-2b2c-4d09-a68d-b7b95e99f824\") "
Sep 12 17:34:19.780510 kubelet[2549]: I0912 17:34:19.779292 2549 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76282f87-2b2c-4d09-a68d-b7b95e99f824-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "76282f87-2b2c-4d09-a68d-b7b95e99f824" (UID: "76282f87-2b2c-4d09-a68d-b7b95e99f824"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 17:34:19.783020 kubelet[2549]: I0912 17:34:19.782977 2549 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76282f87-2b2c-4d09-a68d-b7b95e99f824-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "76282f87-2b2c-4d09-a68d-b7b95e99f824" (UID: "76282f87-2b2c-4d09-a68d-b7b95e99f824"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 12 17:34:19.787259 kubelet[2549]: I0912 17:34:19.787203 2549 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76282f87-2b2c-4d09-a68d-b7b95e99f824-kube-api-access-fnnd6" (OuterVolumeSpecName: "kube-api-access-fnnd6") pod "76282f87-2b2c-4d09-a68d-b7b95e99f824" (UID: "76282f87-2b2c-4d09-a68d-b7b95e99f824"). InnerVolumeSpecName "kube-api-access-fnnd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 17:34:19.787450 systemd[1]: var-lib-kubelet-pods-76282f87\x2d2b2c\x2d4d09\x2da68d\x2db7b95e99f824-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 12 17:34:19.791446 systemd[1]: var-lib-kubelet-pods-76282f87\x2d2b2c\x2d4d09\x2da68d\x2db7b95e99f824-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfnnd6.mount: Deactivated successfully.
Sep 12 17:34:19.869695 kubelet[2549]: I0912 17:34:19.869633 2549 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76282f87-2b2c-4d09-a68d-b7b95e99f824-whisker-backend-key-pair\") on node \"ci-4081-3-6-c-e429241c3f\" DevicePath \"\""
Sep 12 17:34:19.869695 kubelet[2549]: I0912 17:34:19.869675 2549 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76282f87-2b2c-4d09-a68d-b7b95e99f824-whisker-ca-bundle\") on node \"ci-4081-3-6-c-e429241c3f\" DevicePath \"\""
Sep 12 17:34:19.869695 kubelet[2549]: I0912 17:34:19.869687 2549 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnnd6\" (UniqueName: \"kubernetes.io/projected/76282f87-2b2c-4d09-a68d-b7b95e99f824-kube-api-access-fnnd6\") on node \"ci-4081-3-6-c-e429241c3f\" DevicePath \"\""
Sep 12 17:34:20.461923 kubelet[2549]: I0912 17:34:20.461883 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:34:20.462630 systemd[1]: Removed slice kubepods-besteffort-pod76282f87_2b2c_4d09_a68d_b7b95e99f824.slice - libcontainer container kubepods-besteffort-pod76282f87_2b2c_4d09_a68d_b7b95e99f824.slice.
Sep 12 17:34:20.499954 kubelet[2549]: I0912 17:34:20.491679 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9l9b2" podStartSLOduration=3.003738639 podStartE2EDuration="19.4837096s" podCreationTimestamp="2025-09-12 17:34:01 +0000 UTC" firstStartedPulling="2025-09-12 17:34:02.332749178 +0000 UTC m=+19.163588060" lastFinishedPulling="2025-09-12 17:34:18.81272014 +0000 UTC m=+35.643559021" observedRunningTime="2025-09-12 17:34:19.587737943 +0000 UTC m=+36.418576824" watchObservedRunningTime="2025-09-12 17:34:20.4837096 +0000 UTC m=+37.314548480"
Sep 12 17:34:20.542887 systemd[1]: Created slice kubepods-besteffort-pod7275c405_eddb_438f_b8c5_24df83397f76.slice - libcontainer container kubepods-besteffort-pod7275c405_eddb_438f_b8c5_24df83397f76.slice.
Sep 12 17:34:20.576045 kubelet[2549]: I0912 17:34:20.575987 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7275c405-eddb-438f-b8c5-24df83397f76-whisker-ca-bundle\") pod \"whisker-79c876859b-zzxpj\" (UID: \"7275c405-eddb-438f-b8c5-24df83397f76\") " pod="calico-system/whisker-79c876859b-zzxpj"
Sep 12 17:34:20.576045 kubelet[2549]: I0912 17:34:20.576073 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66l6t\" (UniqueName: \"kubernetes.io/projected/7275c405-eddb-438f-b8c5-24df83397f76-kube-api-access-66l6t\") pod \"whisker-79c876859b-zzxpj\" (UID: \"7275c405-eddb-438f-b8c5-24df83397f76\") " pod="calico-system/whisker-79c876859b-zzxpj"
Sep 12 17:34:20.576045 kubelet[2549]: I0912 17:34:20.576097 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7275c405-eddb-438f-b8c5-24df83397f76-whisker-backend-key-pair\") pod \"whisker-79c876859b-zzxpj\" (UID: \"7275c405-eddb-438f-b8c5-24df83397f76\") " pod="calico-system/whisker-79c876859b-zzxpj"
Sep 12 17:34:20.847243 containerd[1498]: time="2025-09-12T17:34:20.847075554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c876859b-zzxpj,Uid:7275c405-eddb-438f-b8c5-24df83397f76,Namespace:calico-system,Attempt:0,}"
Sep 12 17:34:21.066688 systemd-networkd[1386]: cali9fbeb621553: Link UP
Sep 12 17:34:21.071218 systemd-networkd[1386]: cali9fbeb621553: Gained carrier
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:20.943 [INFO][3960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:20.957 [INFO][3960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0 whisker-79c876859b- calico-system 7275c405-eddb-438f-b8c5-24df83397f76 901 0 2025-09-12 17:34:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79c876859b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-c-e429241c3f whisker-79c876859b-zzxpj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9fbeb621553 [] [] }} ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Namespace="calico-system" Pod="whisker-79c876859b-zzxpj" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:20.957 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Namespace="calico-system" Pod="whisker-79c876859b-zzxpj" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:20.998 [INFO][3977] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" HandleID="k8s-pod-network.57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:20.999 [INFO][3977] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" HandleID="k8s-pod-network.57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5890), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-c-e429241c3f", "pod":"whisker-79c876859b-zzxpj", "timestamp":"2025-09-12 17:34:20.998366681 +0000 UTC"}, Hostname:"ci-4081-3-6-c-e429241c3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:20.999 [INFO][3977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:20.999 [INFO][3977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:20.999 [INFO][3977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-c-e429241c3f'
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.008 [INFO][3977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.016 [INFO][3977] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.022 [INFO][3977] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.026 [INFO][3977] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.028 [INFO][3977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.028 [INFO][3977] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.029 [INFO][3977] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.035 [INFO][3977] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.046 [INFO][3977] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.65/26] block=192.168.109.64/26 handle="k8s-pod-network.57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.046 [INFO][3977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.65/26] handle="k8s-pod-network.57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.046 [INFO][3977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:21.087563 containerd[1498]: 2025-09-12 17:34:21.046 [INFO][3977] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.65/26] IPv6=[] ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" HandleID="k8s-pod-network.57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0"
Sep 12 17:34:21.088265 containerd[1498]: 2025-09-12 17:34:21.050 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Namespace="calico-system" Pod="whisker-79c876859b-zzxpj" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0", GenerateName:"whisker-79c876859b-", Namespace:"calico-system", SelfLink:"", UID:"7275c405-eddb-438f-b8c5-24df83397f76", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79c876859b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"", Pod:"whisker-79c876859b-zzxpj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9fbeb621553", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:21.088265 containerd[1498]: 2025-09-12 17:34:21.050 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.65/32] ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Namespace="calico-system" Pod="whisker-79c876859b-zzxpj" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0"
Sep 12 17:34:21.088265 containerd[1498]: 2025-09-12 17:34:21.050 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fbeb621553 ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Namespace="calico-system" Pod="whisker-79c876859b-zzxpj" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0"
Sep 12 17:34:21.088265 containerd[1498]: 2025-09-12 17:34:21.061 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Namespace="calico-system" Pod="whisker-79c876859b-zzxpj" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0"
Sep 12 17:34:21.088265 containerd[1498]: 2025-09-12 17:34:21.068 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Namespace="calico-system" Pod="whisker-79c876859b-zzxpj" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0", GenerateName:"whisker-79c876859b-", Namespace:"calico-system", SelfLink:"", UID:"7275c405-eddb-438f-b8c5-24df83397f76", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79c876859b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd", Pod:"whisker-79c876859b-zzxpj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9fbeb621553", MAC:"4a:fa:58:40:41:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:21.088265 containerd[1498]: 2025-09-12 17:34:21.080 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd" Namespace="calico-system" Pod="whisker-79c876859b-zzxpj" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-whisker--79c876859b--zzxpj-eth0"
Sep 12 17:34:21.090174 kernel: bpftool[4014]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 12 17:34:21.130062 containerd[1498]: time="2025-09-12T17:34:21.129626112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:34:21.130062 containerd[1498]: time="2025-09-12T17:34:21.129997522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:34:21.130704 containerd[1498]: time="2025-09-12T17:34:21.130225507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:21.130807 containerd[1498]: time="2025-09-12T17:34:21.130763806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:21.152480 systemd[1]: Started cri-containerd-57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd.scope - libcontainer container 57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd.
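The IPAM trace above is Calico's block-affinity allocation: under a host-wide lock, the plugin confirms this node's affinity to the block 192.168.109.64/26, loads the block, picks the first free address, creates a handle named after the sandbox ID, and writes the block back to claim the IP. A simplified Go sketch of the ordinal-based assignment (illustrative only; the real ipam.go also handles multiple pools, affinity conflicts, and compare-and-swap retries, and skipping ordinal 0 here is just an assumption consistent with .65 being handed out first):

```go
package main

import (
	"fmt"
	"net"
)

// block models one affinity block; allocations[i] holds the handle
// that owns ordinal i, or "" when the slot is free.
type block struct {
	cidr        *net.IPNet
	allocations []string
}

func newBlock(cidr string) *block {
	_, n, _ := net.ParseCIDR(cidr)
	ones, bits := n.Mask.Size()
	return &block{cidr: n, allocations: make([]string, 1<<(bits-ones))}
}

// assign claims the first free ordinal for handleID, skipping ordinal 0
// (the block's network address), and returns the resulting IP.
func (b *block) assign(handleID string) (net.IP, bool) {
	for i := 1; i < len(b.allocations); i++ {
		if b.allocations[i] == "" {
			b.allocations[i] = handleID
			ip := make(net.IP, len(b.cidr.IP))
			copy(ip, b.cidr.IP)
			ip[len(ip)-1] += byte(i) // cannot overflow within a /26
			return ip, true
		}
	}
	return nil, false // block exhausted; real IPAM would claim another block
}

func main() {
	b := newBlock("192.168.109.64/26")
	for _, h := range []string{"whisker", "goldmane", "apiserver"} {
		ip, _ := b.assign("k8s-pod-network." + h)
		fmt.Println(h, "->", ip) // .65, .66, .67, matching the log
	}
}
```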
Sep 12 17:34:21.212565 containerd[1498]: time="2025-09-12T17:34:21.212518877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c876859b-zzxpj,Uid:7275c405-eddb-438f-b8c5-24df83397f76,Namespace:calico-system,Attempt:0,} returns sandbox id \"57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd\""
Sep 12 17:34:21.235847 containerd[1498]: time="2025-09-12T17:34:21.235652999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 12 17:34:21.280462 kubelet[2549]: I0912 17:34:21.280411 2549 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76282f87-2b2c-4d09-a68d-b7b95e99f824" path="/var/lib/kubelet/pods/76282f87-2b2c-4d09-a68d-b7b95e99f824/volumes"
Sep 12 17:34:21.355456 systemd-networkd[1386]: vxlan.calico: Link UP
Sep 12 17:34:21.355468 systemd-networkd[1386]: vxlan.calico: Gained carrier
Sep 12 17:34:22.587524 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL
Sep 12 17:34:22.907423 systemd-networkd[1386]: cali9fbeb621553: Gained IPv6LL
Sep 12 17:34:23.172702 containerd[1498]: time="2025-09-12T17:34:23.172550300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:23.175118 containerd[1498]: time="2025-09-12T17:34:23.175071265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 12 17:34:23.175994 containerd[1498]: time="2025-09-12T17:34:23.175922951Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:23.178260 containerd[1498]: time="2025-09-12T17:34:23.178196644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:23.178815 containerd[1498]: time="2025-09-12T17:34:23.178672683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.942977795s"
Sep 12 17:34:23.178815 containerd[1498]: time="2025-09-12T17:34:23.178701698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 12 17:34:23.181940 containerd[1498]: time="2025-09-12T17:34:23.181903745Z" level=info msg="CreateContainer within sandbox \"57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 12 17:34:23.223091 containerd[1498]: time="2025-09-12T17:34:23.223021426Z" level=info msg="CreateContainer within sandbox \"57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f0384929a3cbddebe11fb80c5208ec681fbb1f916e8a098dd0c7521a9b207c17\""
Sep 12 17:34:23.225190 containerd[1498]: time="2025-09-12T17:34:23.224385172Z" level=info msg="StartContainer for \"f0384929a3cbddebe11fb80c5208ec681fbb1f916e8a098dd0c7521a9b207c17\""
Sep 12 17:34:23.259340 systemd[1]: Started cri-containerd-f0384929a3cbddebe11fb80c5208ec681fbb1f916e8a098dd0c7521a9b207c17.scope - libcontainer container f0384929a3cbddebe11fb80c5208ec681fbb1f916e8a098dd0c7521a9b207c17.
Sep 12 17:34:23.269312 containerd[1498]: time="2025-09-12T17:34:23.268783219Z" level=info msg="StopPodSandbox for \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\""
Sep 12 17:34:23.328325 containerd[1498]: time="2025-09-12T17:34:23.328247082Z" level=info msg="StartContainer for \"f0384929a3cbddebe11fb80c5208ec681fbb1f916e8a098dd0c7521a9b207c17\" returns successfully"
Sep 12 17:34:23.335062 containerd[1498]: time="2025-09-12T17:34:23.334785050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.339 [INFO][4173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397"
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.340 [INFO][4173] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" iface="eth0" netns="/var/run/netns/cni-297ae5d2-3ff9-2cba-8d77-bde4b7a08e27"
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.342 [INFO][4173] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" iface="eth0" netns="/var/run/netns/cni-297ae5d2-3ff9-2cba-8d77-bde4b7a08e27"
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.345 [INFO][4173] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" iface="eth0" netns="/var/run/netns/cni-297ae5d2-3ff9-2cba-8d77-bde4b7a08e27"
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.345 [INFO][4173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397"
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.345 [INFO][4173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397"
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.375 [INFO][4188] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" HandleID="k8s-pod-network.c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0"
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.375 [INFO][4188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.376 [INFO][4188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.380 [WARNING][4188] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" HandleID="k8s-pod-network.c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0"
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.380 [INFO][4188] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" HandleID="k8s-pod-network.c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0"
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.382 [INFO][4188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:23.388170 containerd[1498]: 2025-09-12 17:34:23.384 [INFO][4173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397"
Sep 12 17:34:23.388170 containerd[1498]: time="2025-09-12T17:34:23.386080702Z" level=info msg="TearDown network for sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\" successfully"
Sep 12 17:34:23.388170 containerd[1498]: time="2025-09-12T17:34:23.386106401Z" level=info msg="StopPodSandbox for \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\" returns successfully"
Sep 12 17:34:23.390384 containerd[1498]: time="2025-09-12T17:34:23.389197946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gbp7h,Uid:6b2d4555-77d4-4589-97dc-8cc68e252176,Namespace:calico-system,Attempt:1,}"
Sep 12 17:34:23.388389 systemd[1]: run-netns-cni\x2d297ae5d2\x2d3ff9\x2d2cba\x2d8d77\x2dbde4b7a08e27.mount: Deactivated successfully.
Sep 12 17:34:23.504346 systemd-networkd[1386]: cali0db79aea7fc: Link UP
Sep 12 17:34:23.504884 systemd-networkd[1386]: cali0db79aea7fc: Gained carrier
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.440 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0 goldmane-7988f88666- calico-system 6b2d4555-77d4-4589-97dc-8cc68e252176 915 0 2025-09-12 17:34:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-c-e429241c3f goldmane-7988f88666-gbp7h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0db79aea7fc [] [] }} ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Namespace="calico-system" Pod="goldmane-7988f88666-gbp7h" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.440 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Namespace="calico-system" Pod="goldmane-7988f88666-gbp7h" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.464 [INFO][4212] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" HandleID="k8s-pod-network.0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.464 [INFO][4212] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" HandleID="k8s-pod-network.0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-c-e429241c3f", "pod":"goldmane-7988f88666-gbp7h", "timestamp":"2025-09-12 17:34:23.464597525 +0000 UTC"}, Hostname:"ci-4081-3-6-c-e429241c3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.465 [INFO][4212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.465 [INFO][4212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.465 [INFO][4212] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-c-e429241c3f'
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.471 [INFO][4212] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.477 [INFO][4212] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.483 [INFO][4212] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.485 [INFO][4212] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.486 [INFO][4212] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.487 [INFO][4212] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.488 [INFO][4212] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.491 [INFO][4212] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.496 [INFO][4212] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.66/26] block=192.168.109.64/26 handle="k8s-pod-network.0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.496 [INFO][4212] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.66/26] handle="k8s-pod-network.0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.496 [INFO][4212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:23.522537 containerd[1498]: 2025-09-12 17:34:23.496 [INFO][4212] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.66/26] IPv6=[] ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" HandleID="k8s-pod-network.0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0"
Sep 12 17:34:23.523269 containerd[1498]: 2025-09-12 17:34:23.499 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Namespace="calico-system" Pod="goldmane-7988f88666-gbp7h" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6b2d4555-77d4-4589-97dc-8cc68e252176", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"", Pod:"goldmane-7988f88666-gbp7h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0db79aea7fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:23.523269 containerd[1498]: 2025-09-12 17:34:23.499 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.66/32] ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Namespace="calico-system" Pod="goldmane-7988f88666-gbp7h" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0"
Sep 12 17:34:23.523269 containerd[1498]: 2025-09-12 17:34:23.499 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0db79aea7fc ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Namespace="calico-system" Pod="goldmane-7988f88666-gbp7h" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0"
Sep 12 17:34:23.523269 containerd[1498]: 2025-09-12 17:34:23.505 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Namespace="calico-system" Pod="goldmane-7988f88666-gbp7h" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0"
Sep 12 17:34:23.523269 containerd[1498]: 2025-09-12 17:34:23.505 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Namespace="calico-system" Pod="goldmane-7988f88666-gbp7h" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6b2d4555-77d4-4589-97dc-8cc68e252176", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff", Pod:"goldmane-7988f88666-gbp7h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0db79aea7fc", MAC:"9a:43:9c:c0:46:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:23.523269 containerd[1498]: 2025-09-12 17:34:23.518 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff" Namespace="calico-system" Pod="goldmane-7988f88666-gbp7h" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0"
Sep 12 17:34:23.541674 containerd[1498]: time="2025-09-12T17:34:23.541458847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:34:23.541674 containerd[1498]: time="2025-09-12T17:34:23.541519072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:34:23.541674 containerd[1498]: time="2025-09-12T17:34:23.541541845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:23.542137 containerd[1498]: time="2025-09-12T17:34:23.541619402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:23.563314 systemd[1]: Started cri-containerd-0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff.scope - libcontainer container 0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff.
Sep 12 17:34:23.599517 containerd[1498]: time="2025-09-12T17:34:23.599394479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gbp7h,Uid:6b2d4555-77d4-4589-97dc-8cc68e252176,Namespace:calico-system,Attempt:1,} returns sandbox id \"0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff\""
Sep 12 17:34:24.266518 containerd[1498]: time="2025-09-12T17:34:24.266389158Z" level=info msg="StopPodSandbox for \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\""
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.312 [INFO][4277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261"
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.312 [INFO][4277] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" iface="eth0" netns="/var/run/netns/cni-8f4ea353-062a-8986-0d19-bd606b442493"
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.313 [INFO][4277] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" iface="eth0" netns="/var/run/netns/cni-8f4ea353-062a-8986-0d19-bd606b442493"
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.313 [INFO][4277] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" iface="eth0" netns="/var/run/netns/cni-8f4ea353-062a-8986-0d19-bd606b442493"
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.313 [INFO][4277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261"
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.313 [INFO][4277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261"
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.335 [INFO][4284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" HandleID="k8s-pod-network.fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0"
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.335 [INFO][4284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.335 [INFO][4284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.342 [WARNING][4284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" HandleID="k8s-pod-network.fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0"
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.342 [INFO][4284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" HandleID="k8s-pod-network.fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0"
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.343 [INFO][4284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:24.347587 containerd[1498]: 2025-09-12 17:34:24.345 [INFO][4277] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261"
Sep 12 17:34:24.348087 containerd[1498]: time="2025-09-12T17:34:24.347802491Z" level=info msg="TearDown network for sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\" successfully"
Sep 12 17:34:24.348087 containerd[1498]: time="2025-09-12T17:34:24.347841055Z" level=info msg="StopPodSandbox for \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\" returns successfully"
Sep 12 17:34:24.351281 containerd[1498]: time="2025-09-12T17:34:24.350778122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c8b64669-dswbl,Uid:bbff376f-9ddd-4344-b41b-dd9ac22821d6,Namespace:calico-apiserver,Attempt:1,}"
Sep 12 17:34:24.351410 systemd[1]: run-netns-cni\x2d8f4ea353\x2d062a\x2d8986\x2d0d19\x2dbd606b442493.mount: Deactivated successfully.
Sep 12 17:34:24.459644 systemd-networkd[1386]: calia925c21e3e2: Link UP
Sep 12 17:34:24.460819 systemd-networkd[1386]: calia925c21e3e2: Gained carrier
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.399 [INFO][4291] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0 calico-apiserver-65c8b64669- calico-apiserver bbff376f-9ddd-4344-b41b-dd9ac22821d6 925 0 2025-09-12 17:33:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65c8b64669 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-c-e429241c3f calico-apiserver-65c8b64669-dswbl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia925c21e3e2 [] [] }} ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-dswbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.399 [INFO][4291] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-dswbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.425 [INFO][4302] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.425 [INFO][4302] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-c-e429241c3f", "pod":"calico-apiserver-65c8b64669-dswbl", "timestamp":"2025-09-12 17:34:24.425093433 +0000 UTC"}, Hostname:"ci-4081-3-6-c-e429241c3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.425 [INFO][4302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.425 [INFO][4302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.425 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-c-e429241c3f'
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.431 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.435 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.439 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.440 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.442 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.442 [INFO][4302] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.443 [INFO][4302] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.450 [INFO][4302] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.454 [INFO][4302] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.67/26] block=192.168.109.64/26 handle="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.455 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.67/26] handle="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.455 [INFO][4302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:24.482720 containerd[1498]: 2025-09-12 17:34:24.455 [INFO][4302] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.67/26] IPv6=[] ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0"
Sep 12 17:34:24.483788 containerd[1498]: 2025-09-12 17:34:24.457 [INFO][4291] cni-plugin/k8s.go 418: Populated endpoint ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-dswbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0", GenerateName:"calico-apiserver-65c8b64669-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbff376f-9ddd-4344-b41b-dd9ac22821d6", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c8b64669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"", Pod:"calico-apiserver-65c8b64669-dswbl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia925c21e3e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:24.483788 containerd[1498]: 2025-09-12 17:34:24.457 [INFO][4291] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.67/32] ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-dswbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0"
Sep 12 17:34:24.483788 containerd[1498]: 2025-09-12 17:34:24.457 [INFO][4291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia925c21e3e2 ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-dswbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0"
Sep 12 17:34:24.483788 containerd[1498]: 2025-09-12 17:34:24.462 [INFO][4291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-dswbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0"
Sep 12 17:34:24.483788 containerd[1498]: 2025-09-12 17:34:24.463 [INFO][4291] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-dswbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0", GenerateName:"calico-apiserver-65c8b64669-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbff376f-9ddd-4344-b41b-dd9ac22821d6", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c8b64669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083", Pod:"calico-apiserver-65c8b64669-dswbl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia925c21e3e2", MAC:"0a:75:f2:0f:04:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:24.483788 containerd[1498]: 2025-09-12 17:34:24.478 [INFO][4291] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-dswbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0"
Sep 12 17:34:24.504905 containerd[1498]: time="2025-09-12T17:34:24.504702079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:34:24.504905 containerd[1498]: time="2025-09-12T17:34:24.504759168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:34:24.504905 containerd[1498]: time="2025-09-12T17:34:24.504768537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:24.504905 containerd[1498]: time="2025-09-12T17:34:24.504833220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:24.527400 systemd[1]: Started cri-containerd-489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083.scope - libcontainer container 489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083.
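All three endpoint MACs written to the datastore so far (4a:fa:58:40:41:d8, 9a:43:9c:c0:46:3c, 0a:75:f2:0f:04:7a) share the same first-octet pattern: the locally-administered bit set and the multicast bit clear, i.e. a randomly generated unicast MAC. A short Go sketch that produces addresses with the same property (a generic illustration of the convention, not Calico's exact code path):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// randomLocalMAC returns a random unicast, locally-administered MAC:
// clear bit 0 (multicast) and set bit 1 (locally administered) of octet 0.
func randomLocalMAC() (string, error) {
	mac := make([]byte, 6)
	if _, err := rand.Read(mac); err != nil {
		return "", err
	}
	mac[0] = (mac[0] &^ 0x01) | 0x02
	return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x",
		mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]), nil
}

func main() {
	m, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	// The first octet always ends in binary 10, like 4a, 9a and 0a above.
	fmt.Println(m)
}
```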
Sep 12 17:34:24.565849 containerd[1498]: time="2025-09-12T17:34:24.565815120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c8b64669-dswbl,Uid:bbff376f-9ddd-4344-b41b-dd9ac22821d6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\""
Sep 12 17:34:24.891699 systemd-networkd[1386]: cali0db79aea7fc: Gained IPv6LL
Sep 12 17:34:25.268070 containerd[1498]: time="2025-09-12T17:34:25.266717945Z" level=info msg="StopPodSandbox for \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\""
Sep 12 17:34:25.268070 containerd[1498]: time="2025-09-12T17:34:25.267517822Z" level=info msg="StopPodSandbox for \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\""
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.333 [INFO][4376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc"
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.333 [INFO][4376] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" iface="eth0" netns="/var/run/netns/cni-becb219f-407f-273b-80ea-afe570c2230d"
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.334 [INFO][4376] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" iface="eth0" netns="/var/run/netns/cni-becb219f-407f-273b-80ea-afe570c2230d"
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.334 [INFO][4376] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" iface="eth0" netns="/var/run/netns/cni-becb219f-407f-273b-80ea-afe570c2230d"
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.334 [INFO][4376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc"
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.334 [INFO][4376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc"
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.355 [INFO][4394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" HandleID="k8s-pod-network.6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0"
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.356 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.356 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.362 [WARNING][4394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" HandleID="k8s-pod-network.6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0"
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.362 [INFO][4394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" HandleID="k8s-pod-network.6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0"
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.363 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:25.368551 containerd[1498]: 2025-09-12 17:34:25.365 [INFO][4376] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc"
Sep 12 17:34:25.371506 containerd[1498]: time="2025-09-12T17:34:25.371184116Z" level=info msg="TearDown network for sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\" successfully"
Sep 12 17:34:25.371506 containerd[1498]: time="2025-09-12T17:34:25.371210215Z" level=info msg="StopPodSandbox for \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\" returns successfully"
Sep 12 17:34:25.372663 containerd[1498]: time="2025-09-12T17:34:25.372110815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675d4f9887-6qpj9,Uid:71dc886e-53d6-4485-9dc3-23a94916f815,Namespace:calico-system,Attempt:1,}"
Sep 12 17:34:25.372426 systemd[1]: run-netns-cni\x2dbecb219f\x2d407f\x2d273b\x2d80ea\x2dafe570c2230d.mount: Deactivated successfully.
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.326 [INFO][4375] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe"
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.326 [INFO][4375] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" iface="eth0" netns="/var/run/netns/cni-0dc15832-6b7b-0d90-5237-941075eaa452"
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.327 [INFO][4375] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" iface="eth0" netns="/var/run/netns/cni-0dc15832-6b7b-0d90-5237-941075eaa452"
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.327 [INFO][4375] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" iface="eth0" netns="/var/run/netns/cni-0dc15832-6b7b-0d90-5237-941075eaa452"
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.327 [INFO][4375] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe"
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.327 [INFO][4375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe"
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.364 [INFO][4389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0"
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.364 [INFO][4389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.364 [INFO][4389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.374 [WARNING][4389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0"
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.374 [INFO][4389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0"
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.376 [INFO][4389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:25.380501 containerd[1498]: 2025-09-12 17:34:25.378 [INFO][4375] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe"
Sep 12 17:34:25.382877 containerd[1498]: time="2025-09-12T17:34:25.380967792Z" level=info msg="TearDown network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\" successfully"
Sep 12 17:34:25.382877 containerd[1498]: time="2025-09-12T17:34:25.380991828Z" level=info msg="StopPodSandbox for \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\" returns successfully"
Sep 12 17:34:25.382877 containerd[1498]: time="2025-09-12T17:34:25.381953854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c8b64669-gr9jn,Uid:da5d1853-6f02-4cda-8e7a-51c5e86f7848,Namespace:calico-apiserver,Attempt:1,}"
Sep 12 17:34:25.384896 systemd[1]: run-netns-cni\x2d0dc15832\x2d6b7b\x2d0d90\x2d5237\x2d941075eaa452.mount: Deactivated successfully.
Sep 12 17:34:25.511062 systemd-networkd[1386]: cali85291b51178: Link UP
Sep 12 17:34:25.513611 systemd-networkd[1386]: cali85291b51178: Gained carrier
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.431 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0 calico-kube-controllers-675d4f9887- calico-system 71dc886e-53d6-4485-9dc3-23a94916f815 935 0 2025-09-12 17:34:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:675d4f9887 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-c-e429241c3f calico-kube-controllers-675d4f9887-6qpj9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali85291b51178 [] [] }} ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Namespace="calico-system" Pod="calico-kube-controllers-675d4f9887-6qpj9" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.431 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Namespace="calico-system" Pod="calico-kube-controllers-675d4f9887-6qpj9" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.464 [INFO][4427] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" HandleID="k8s-pod-network.1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.464 [INFO][4427] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" HandleID="k8s-pod-network.1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad550), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-c-e429241c3f", "pod":"calico-kube-controllers-675d4f9887-6qpj9", "timestamp":"2025-09-12 17:34:25.46452585 +0000 UTC"}, Hostname:"ci-4081-3-6-c-e429241c3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.464 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.464 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.464 [INFO][4427] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-c-e429241c3f'
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.472 [INFO][4427] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.476 [INFO][4427] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.481 [INFO][4427] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.483 [INFO][4427] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.486 [INFO][4427] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.486 [INFO][4427] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.488 [INFO][4427] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.491 [INFO][4427] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.498 [INFO][4427] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.68/26] block=192.168.109.64/26 handle="k8s-pod-network.1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.498 [INFO][4427] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.68/26] handle="k8s-pod-network.1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.498 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:25.532750 containerd[1498]: 2025-09-12 17:34:25.498 [INFO][4427] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.68/26] IPv6=[] ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" HandleID="k8s-pod-network.1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0"
Sep 12 17:34:25.536712 containerd[1498]: 2025-09-12 17:34:25.503 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Namespace="calico-system" Pod="calico-kube-controllers-675d4f9887-6qpj9" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0", GenerateName:"calico-kube-controllers-675d4f9887-", Namespace:"calico-system", SelfLink:"", UID:"71dc886e-53d6-4485-9dc3-23a94916f815", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"675d4f9887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"", Pod:"calico-kube-controllers-675d4f9887-6qpj9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85291b51178", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:25.536712 containerd[1498]: 2025-09-12 17:34:25.504 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.68/32] ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Namespace="calico-system" Pod="calico-kube-controllers-675d4f9887-6qpj9" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0"
Sep 12 17:34:25.536712 containerd[1498]: 2025-09-12 17:34:25.504 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85291b51178 ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Namespace="calico-system" Pod="calico-kube-controllers-675d4f9887-6qpj9" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0"
Sep 12 17:34:25.536712 containerd[1498]: 2025-09-12 17:34:25.512 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Namespace="calico-system" Pod="calico-kube-controllers-675d4f9887-6qpj9" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0"
Sep 12 17:34:25.536712 containerd[1498]: 2025-09-12 17:34:25.512 [INFO][4403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Namespace="calico-system" Pod="calico-kube-controllers-675d4f9887-6qpj9" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0", GenerateName:"calico-kube-controllers-675d4f9887-", Namespace:"calico-system", SelfLink:"", UID:"71dc886e-53d6-4485-9dc3-23a94916f815", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"675d4f9887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc", Pod:"calico-kube-controllers-675d4f9887-6qpj9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85291b51178", MAC:"a6:9f:c1:e0:75:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:25.536712 containerd[1498]: 2025-09-12 17:34:25.522 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc" Namespace="calico-system" Pod="calico-kube-controllers-675d4f9887-6qpj9" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0"
Sep 12 17:34:25.569261 containerd[1498]: time="2025-09-12T17:34:25.569174499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:34:25.569511 containerd[1498]: time="2025-09-12T17:34:25.569242097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:34:25.569511 containerd[1498]: time="2025-09-12T17:34:25.569259670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:25.569511 containerd[1498]: time="2025-09-12T17:34:25.569380331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:25.599502 systemd[1]: Started cri-containerd-1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc.scope - libcontainer container 1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc.
Sep 12 17:34:25.615699 systemd-networkd[1386]: cali9dda2b395b3: Link UP
Sep 12 17:34:25.616085 systemd-networkd[1386]: cali9dda2b395b3: Gained carrier
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.441 [INFO][4417] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0 calico-apiserver-65c8b64669- calico-apiserver da5d1853-6f02-4cda-8e7a-51c5e86f7848 934 0 2025-09-12 17:33:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65c8b64669 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-c-e429241c3f calico-apiserver-65c8b64669-gr9jn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9dda2b395b3 [] [] }} ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-gr9jn" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.442 [INFO][4417] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-gr9jn" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.468 [INFO][4432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.468 [INFO][4432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-c-e429241c3f", "pod":"calico-apiserver-65c8b64669-gr9jn", "timestamp":"2025-09-12 17:34:25.468410415 +0000 UTC"}, Hostname:"ci-4081-3-6-c-e429241c3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.468 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.499 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.499 [INFO][4432] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-c-e429241c3f'
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.576 [INFO][4432] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.581 [INFO][4432] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.587 [INFO][4432] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.589 [INFO][4432] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.595 [INFO][4432] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.595 [INFO][4432] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.597 [INFO][4432] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.601 [INFO][4432] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.607 [INFO][4432] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.69/26] block=192.168.109.64/26 handle="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.607 [INFO][4432] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.69/26] handle="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.607 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:25.638035 containerd[1498]: 2025-09-12 17:34:25.607 [INFO][4432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.69/26] IPv6=[] ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0"
Sep 12 17:34:25.639756 containerd[1498]: 2025-09-12 17:34:25.611 [INFO][4417] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-gr9jn" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0", GenerateName:"calico-apiserver-65c8b64669-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5d1853-6f02-4cda-8e7a-51c5e86f7848", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c8b64669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"", Pod:"calico-apiserver-65c8b64669-gr9jn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dda2b395b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:25.639756 containerd[1498]: 2025-09-12 17:34:25.611 [INFO][4417] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.69/32] ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-gr9jn" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0"
Sep 12 17:34:25.639756 containerd[1498]: 2025-09-12 17:34:25.612 [INFO][4417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dda2b395b3 ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-gr9jn" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0"
Sep 12 17:34:25.639756 containerd[1498]: 2025-09-12 17:34:25.613 [INFO][4417] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-gr9jn" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0"
Sep 12 17:34:25.639756 containerd[1498]: 2025-09-12 17:34:25.617 [INFO][4417] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-gr9jn" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0", GenerateName:"calico-apiserver-65c8b64669-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5d1853-6f02-4cda-8e7a-51c5e86f7848", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c8b64669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632", Pod:"calico-apiserver-65c8b64669-gr9jn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dda2b395b3", MAC:"2a:a3:31:3b:e2:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:25.639756 containerd[1498]: 2025-09-12 17:34:25.631 [INFO][4417] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Namespace="calico-apiserver" Pod="calico-apiserver-65c8b64669-gr9jn" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0"
Sep 12 17:34:25.663833 containerd[1498]: time="2025-09-12T17:34:25.663754707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:34:25.663833 containerd[1498]: time="2025-09-12T17:34:25.663808879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:34:25.664159 containerd[1498]: time="2025-09-12T17:34:25.664005135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:25.664851 containerd[1498]: time="2025-09-12T17:34:25.664209414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:25.688427 systemd[1]: Started cri-containerd-4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632.scope - libcontainer container 4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632.
Sep 12 17:34:25.694186 containerd[1498]: time="2025-09-12T17:34:25.693117969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675d4f9887-6qpj9,Uid:71dc886e-53d6-4485-9dc3-23a94916f815,Namespace:calico-system,Attempt:1,} returns sandbox id \"1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc\""
Sep 12 17:34:25.742832 containerd[1498]: time="2025-09-12T17:34:25.742774882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c8b64669-gr9jn,Uid:da5d1853-6f02-4cda-8e7a-51c5e86f7848,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\""
Sep 12 17:34:26.033625 containerd[1498]: time="2025-09-12T17:34:26.033576251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:26.034415 containerd[1498]: time="2025-09-12T17:34:26.034339497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 12 17:34:26.035123 containerd[1498]: time="2025-09-12T17:34:26.035096061Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:26.037532 containerd[1498]: time="2025-09-12T17:34:26.036879534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:34:26.037532 containerd[1498]: time="2025-09-12T17:34:26.037440856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.702623353s"
Sep 12 17:34:26.037532 containerd[1498]: time="2025-09-12T17:34:26.037463780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 12 17:34:26.038830 containerd[1498]: time="2025-09-12T17:34:26.038812454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 12 17:34:26.039569 containerd[1498]: time="2025-09-12T17:34:26.039551273Z" level=info msg="CreateContainer within sandbox \"57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 17:34:26.048895 containerd[1498]: time="2025-09-12T17:34:26.048845050Z" level=info msg="CreateContainer within sandbox \"57302e2c2a82e60969341e7f862866ad4f2c21d591fb233bac91de0924f456cd\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5ac999346003d8aeaeddb77b7fbdb99a85a0d257879b96631e61a7c4320a0170\""
Sep 12 17:34:26.049696 containerd[1498]: time="2025-09-12T17:34:26.049681156Z" level=info msg="StartContainer for \"5ac999346003d8aeaeddb77b7fbdb99a85a0d257879b96631e61a7c4320a0170\""
Sep 12 17:34:26.083272 systemd[1]: Started cri-containerd-5ac999346003d8aeaeddb77b7fbdb99a85a0d257879b96631e61a7c4320a0170.scope - libcontainer container 5ac999346003d8aeaeddb77b7fbdb99a85a0d257879b96631e61a7c4320a0170.
Sep 12 17:34:26.119532 containerd[1498]: time="2025-09-12T17:34:26.119206265Z" level=info msg="StartContainer for \"5ac999346003d8aeaeddb77b7fbdb99a85a0d257879b96631e61a7c4320a0170\" returns successfully"
Sep 12 17:34:26.267482 containerd[1498]: time="2025-09-12T17:34:26.267128583Z" level=info msg="StopPodSandbox for \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\""
Sep 12 17:34:26.267762 containerd[1498]: time="2025-09-12T17:34:26.267739719Z" level=info msg="StopPodSandbox for \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\""
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.331 [INFO][4602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80"
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.332 [INFO][4602] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" iface="eth0" netns="/var/run/netns/cni-62098d66-d450-564e-5030-d9ba985a064e"
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.332 [INFO][4602] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" iface="eth0" netns="/var/run/netns/cni-62098d66-d450-564e-5030-d9ba985a064e"
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.333 [INFO][4602] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" iface="eth0" netns="/var/run/netns/cni-62098d66-d450-564e-5030-d9ba985a064e"
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.333 [INFO][4602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80"
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.333 [INFO][4602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80"
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.360 [INFO][4618] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" HandleID="k8s-pod-network.c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0"
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.360 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.360 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.365 [WARNING][4618] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" HandleID="k8s-pod-network.c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0"
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.366 [INFO][4618] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" HandleID="k8s-pod-network.c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0"
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.367 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:26.370599 containerd[1498]: 2025-09-12 17:34:26.368 [INFO][4602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80"
Sep 12 17:34:26.375215 containerd[1498]: time="2025-09-12T17:34:26.373954780Z" level=info msg="TearDown network for sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\" successfully"
Sep 12 17:34:26.375215 containerd[1498]: time="2025-09-12T17:34:26.374029633Z" level=info msg="StopPodSandbox for \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\" returns successfully"
Sep 12 17:34:26.374660 systemd[1]: run-netns-cni\x2d62098d66\x2dd450\x2d564e\x2d5030\x2dd9ba985a064e.mount: Deactivated successfully.
Sep 12 17:34:26.375559 containerd[1498]: time="2025-09-12T17:34:26.375502423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4tcw,Uid:262fff4b-d78b-430c-976d-43c3eb6a4adc,Namespace:calico-system,Attempt:1,}"
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.324 [INFO][4603] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6"
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.328 [INFO][4603] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" iface="eth0" netns="/var/run/netns/cni-ef582de1-399d-d0ff-67d6-55ebb378a07e"
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.329 [INFO][4603] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" iface="eth0" netns="/var/run/netns/cni-ef582de1-399d-d0ff-67d6-55ebb378a07e"
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.330 [INFO][4603] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" iface="eth0" netns="/var/run/netns/cni-ef582de1-399d-d0ff-67d6-55ebb378a07e"
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.330 [INFO][4603] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6"
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.330 [INFO][4603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6"
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.379 [INFO][4617] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" HandleID="k8s-pod-network.f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0"
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.379 [INFO][4617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.379 [INFO][4617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.385 [WARNING][4617] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" HandleID="k8s-pod-network.f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0"
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.385 [INFO][4617] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" HandleID="k8s-pod-network.f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0"
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.387 [INFO][4617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:26.397930 containerd[1498]: 2025-09-12 17:34:26.389 [INFO][4603] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6"
Sep 12 17:34:26.400112 containerd[1498]: time="2025-09-12T17:34:26.400088005Z" level=info msg="TearDown network for sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\" successfully"
Sep 12 17:34:26.400194 containerd[1498]: time="2025-09-12T17:34:26.400177786Z" level=info msg="StopPodSandbox for \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\" returns successfully"
Sep 12 17:34:26.402701 containerd[1498]: time="2025-09-12T17:34:26.402440575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hhxgk,Uid:b5f203f5-5176-4742-9fe4-7f35ee43dc3d,Namespace:kube-system,Attempt:1,}"
Sep 12 17:34:26.402534 systemd[1]: run-netns-cni\x2def582de1\x2d399d\x2dd0ff\x2d67d6\x2d55ebb378a07e.mount: Deactivated successfully.
Sep 12 17:34:26.491799 systemd-networkd[1386]: calia925c21e3e2: Gained IPv6LL
Sep 12 17:34:26.518685 systemd-networkd[1386]: cali1d08c142ad8: Link UP
Sep 12 17:34:26.519778 systemd-networkd[1386]: cali1d08c142ad8: Gained carrier
Sep 12 17:34:26.537622 kubelet[2549]: I0912 17:34:26.537548 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79c876859b-zzxpj" podStartSLOduration=1.7295561739999998 podStartE2EDuration="6.537528898s" podCreationTimestamp="2025-09-12 17:34:20 +0000 UTC" firstStartedPulling="2025-09-12 17:34:21.230301952 +0000 UTC m=+38.061140834" lastFinishedPulling="2025-09-12 17:34:26.038274677 +0000 UTC m=+42.869113558" observedRunningTime="2025-09-12 17:34:26.512215539 +0000 UTC m=+43.343054420" watchObservedRunningTime="2025-09-12 17:34:26.537528898 +0000 UTC m=+43.368367780"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.438 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0 csi-node-driver- calico-system 262fff4b-d78b-430c-976d-43c3eb6a4adc 951 0 2025-09-12 17:34:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-c-e429241c3f csi-node-driver-j4tcw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1d08c142ad8 [] [] }} ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Namespace="calico-system" Pod="csi-node-driver-j4tcw" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.438 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Namespace="calico-system" Pod="csi-node-driver-j4tcw" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.468 [INFO][4653] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" HandleID="k8s-pod-network.24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.468 [INFO][4653] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" HandleID="k8s-pod-network.24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-c-e429241c3f", "pod":"csi-node-driver-j4tcw", "timestamp":"2025-09-12 17:34:26.468248277 +0000 UTC"}, Hostname:"ci-4081-3-6-c-e429241c3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.468 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.468 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.468 [INFO][4653] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-c-e429241c3f'
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.474 [INFO][4653] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.480 [INFO][4653] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.483 [INFO][4653] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.485 [INFO][4653] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.487 [INFO][4653] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.487 [INFO][4653] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.489 [INFO][4653] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.501 [INFO][4653] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.507 [INFO][4653] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.70/26] block=192.168.109.64/26 handle="k8s-pod-network.24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.508 [INFO][4653] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.70/26] handle="k8s-pod-network.24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" host="ci-4081-3-6-c-e429241c3f"
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.508 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:34:26.547495 containerd[1498]: 2025-09-12 17:34:26.508 [INFO][4653] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.70/26] IPv6=[] ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" HandleID="k8s-pod-network.24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0"
Sep 12 17:34:26.549111 containerd[1498]: 2025-09-12 17:34:26.513 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Namespace="calico-system" Pod="csi-node-driver-j4tcw" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262fff4b-d78b-430c-976d-43c3eb6a4adc", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"", Pod:"csi-node-driver-j4tcw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1d08c142ad8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:26.549111 containerd[1498]: 2025-09-12 17:34:26.513 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.70/32] ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Namespace="calico-system" Pod="csi-node-driver-j4tcw" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0"
Sep 12 17:34:26.549111 containerd[1498]: 2025-09-12 17:34:26.513 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d08c142ad8 ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Namespace="calico-system" Pod="csi-node-driver-j4tcw" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0"
Sep 12 17:34:26.549111 containerd[1498]: 2025-09-12 17:34:26.519 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Namespace="calico-system" Pod="csi-node-driver-j4tcw" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0"
Sep 12 17:34:26.549111 containerd[1498]: 2025-09-12 17:34:26.521 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Namespace="calico-system" Pod="csi-node-driver-j4tcw" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262fff4b-d78b-430c-976d-43c3eb6a4adc", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b", Pod:"csi-node-driver-j4tcw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1d08c142ad8", MAC:"ee:e8:a7:89:b3:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:34:26.549111 containerd[1498]: 2025-09-12 17:34:26.537 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b" Namespace="calico-system" Pod="csi-node-driver-j4tcw" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0"
Sep 12 17:34:26.570438 containerd[1498]: time="2025-09-12T17:34:26.570347873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:34:26.570438 containerd[1498]: time="2025-09-12T17:34:26.570413760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:34:26.571532 containerd[1498]: time="2025-09-12T17:34:26.570768837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:26.571532 containerd[1498]: time="2025-09-12T17:34:26.571450428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:34:26.589301 systemd[1]: Started cri-containerd-24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b.scope - libcontainer container 24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b.
Sep 12 17:34:26.613937 systemd-networkd[1386]: cali84d58f1c53a: Link UP
Sep 12 17:34:26.614083 systemd-networkd[1386]: cali84d58f1c53a: Gained carrier
Sep 12 17:34:26.626215 containerd[1498]: time="2025-09-12T17:34:26.626052635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4tcw,Uid:262fff4b-d78b-430c-976d-43c3eb6a4adc,Namespace:calico-system,Attempt:1,} returns sandbox id \"24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b\""
Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.451 [INFO][4640] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0 coredns-7c65d6cfc9- kube-system b5f203f5-5176-4742-9fe4-7f35ee43dc3d 950 0 2025-09-12 17:33:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-c-e429241c3f coredns-7c65d6cfc9-hhxgk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84d58f1c53a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hhxgk" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-"
Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.451 [INFO][4640] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hhxgk" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0"
Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.478 [INFO][4660] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" HandleID="k8s-pod-network.e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0"
Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.478 [INFO][4660] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" HandleID="k8s-pod-network.e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad490), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-c-e429241c3f", "pod":"coredns-7c65d6cfc9-hhxgk", "timestamp":"2025-09-12 17:34:26.478232532 +0000 UTC"}, Hostname:"ci-4081-3-6-c-e429241c3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.478 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.508 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.508 [INFO][4660] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-c-e429241c3f' Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.575 [INFO][4660] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.583 [INFO][4660] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.588 [INFO][4660] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.590 [INFO][4660] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.592 [INFO][4660] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.592 [INFO][4660] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.593 [INFO][4660] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.598 [INFO][4660] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.603 [INFO][4660] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.71/26] block=192.168.109.64/26 handle="k8s-pod-network.e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.603 [INFO][4660] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.71/26] handle="k8s-pod-network.e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.603 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:34:26.631481 containerd[1498]: 2025-09-12 17:34:26.603 [INFO][4660] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.71/26] IPv6=[] ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" HandleID="k8s-pod-network.e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:26.633391 containerd[1498]: 2025-09-12 17:34:26.611 [INFO][4640] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hhxgk" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b5f203f5-5176-4742-9fe4-7f35ee43dc3d", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"", Pod:"coredns-7c65d6cfc9-hhxgk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84d58f1c53a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:26.633391 containerd[1498]: 2025-09-12 17:34:26.611 [INFO][4640] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.71/32] ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hhxgk" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:26.633391 containerd[1498]: 2025-09-12 17:34:26.611 [INFO][4640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84d58f1c53a ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hhxgk" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:26.633391 containerd[1498]: 2025-09-12 17:34:26.613 [INFO][4640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-hhxgk" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:26.633391 containerd[1498]: 2025-09-12 17:34:26.616 [INFO][4640] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hhxgk" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b5f203f5-5176-4742-9fe4-7f35ee43dc3d", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee", Pod:"coredns-7c65d6cfc9-hhxgk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84d58f1c53a", MAC:"62:93:76:16:95:e8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:26.633391 containerd[1498]: 2025-09-12 17:34:26.629 [INFO][4640] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hhxgk" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:26.654142 containerd[1498]: time="2025-09-12T17:34:26.654061660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:26.654142 containerd[1498]: time="2025-09-12T17:34:26.654116645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:26.654643 containerd[1498]: time="2025-09-12T17:34:26.654485920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:26.654643 containerd[1498]: time="2025-09-12T17:34:26.654605037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:26.670279 systemd[1]: Started cri-containerd-e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee.scope - libcontainer container e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee. Sep 12 17:34:26.708390 containerd[1498]: time="2025-09-12T17:34:26.708351551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hhxgk,Uid:b5f203f5-5176-4742-9fe4-7f35ee43dc3d,Namespace:kube-system,Attempt:1,} returns sandbox id \"e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee\"" Sep 12 17:34:26.711770 containerd[1498]: time="2025-09-12T17:34:26.711747964Z" level=info msg="CreateContainer within sandbox \"e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:34:26.728609 containerd[1498]: time="2025-09-12T17:34:26.728560821Z" level=info msg="CreateContainer within sandbox \"e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"046abb0dde2d86c0421b5d38faffdf0420aba59f617e920ff948902a9e91d229\"" Sep 12 17:34:26.729403 containerd[1498]: time="2025-09-12T17:34:26.729387288Z" level=info msg="StartContainer for \"046abb0dde2d86c0421b5d38faffdf0420aba59f617e920ff948902a9e91d229\"" Sep 12 17:34:26.749339 systemd[1]: Started cri-containerd-046abb0dde2d86c0421b5d38faffdf0420aba59f617e920ff948902a9e91d229.scope - libcontainer container 046abb0dde2d86c0421b5d38faffdf0420aba59f617e920ff948902a9e91d229. Sep 12 17:34:26.773139 containerd[1498]: time="2025-09-12T17:34:26.772958745Z" level=info msg="StartContainer for \"046abb0dde2d86c0421b5d38faffdf0420aba59f617e920ff948902a9e91d229\" returns successfully" Sep 12 17:34:26.875330 systemd-networkd[1386]: cali85291b51178: Gained IPv6LL Sep 12 17:34:27.267378 containerd[1498]: time="2025-09-12T17:34:27.267185957Z" level=info msg="StopPodSandbox for \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\"" Sep 12 17:34:27.323353 systemd-networkd[1386]: cali9dda2b395b3: Gained IPv6LL Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.309 [INFO][4823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.309 [INFO][4823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" iface="eth0" netns="/var/run/netns/cni-5c1bc60c-4ffa-01c9-9568-3c3d265a44fd" Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.309 [INFO][4823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" iface="eth0" netns="/var/run/netns/cni-5c1bc60c-4ffa-01c9-9568-3c3d265a44fd" Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.309 [INFO][4823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" iface="eth0" netns="/var/run/netns/cni-5c1bc60c-4ffa-01c9-9568-3c3d265a44fd" Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.309 [INFO][4823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.309 [INFO][4823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.338 [INFO][4830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" HandleID="k8s-pod-network.33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.338 [INFO][4830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.338 [INFO][4830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.344 [WARNING][4830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" HandleID="k8s-pod-network.33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.344 [INFO][4830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" HandleID="k8s-pod-network.33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.345 [INFO][4830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:27.349025 containerd[1498]: 2025-09-12 17:34:27.346 [INFO][4823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:27.352209 containerd[1498]: time="2025-09-12T17:34:27.349327719Z" level=info msg="TearDown network for sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\" successfully" Sep 12 17:34:27.352209 containerd[1498]: time="2025-09-12T17:34:27.349364047Z" level=info msg="StopPodSandbox for \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\" returns successfully" Sep 12 17:34:27.352209 containerd[1498]: time="2025-09-12T17:34:27.351645921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcc9d7764-p9sbl,Uid:37ecfd1d-6ac6-4ad3-9078-efcb50051c47,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:34:27.351909 systemd[1]: run-netns-cni\x2d5c1bc60c\x2d4ffa\x2d01c9\x2d9568\x2d3c3d265a44fd.mount: Deactivated successfully. 
Sep 12 17:34:27.458907 systemd-networkd[1386]: cali0b05186054c: Link UP Sep 12 17:34:27.460481 systemd-networkd[1386]: cali0b05186054c: Gained carrier Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.398 [INFO][4837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0 calico-apiserver-5dcc9d7764- calico-apiserver 37ecfd1d-6ac6-4ad3-9078-efcb50051c47 970 0 2025-09-12 17:34:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dcc9d7764 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-c-e429241c3f calico-apiserver-5dcc9d7764-p9sbl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0b05186054c [] [] }} ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-p9sbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.398 [INFO][4837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-p9sbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.420 [INFO][4849] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" HandleID="k8s-pod-network.72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.420 [INFO][4849] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" HandleID="k8s-pod-network.72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-c-e429241c3f", "pod":"calico-apiserver-5dcc9d7764-p9sbl", "timestamp":"2025-09-12 17:34:27.42065188 +0000 UTC"}, Hostname:"ci-4081-3-6-c-e429241c3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.420 [INFO][4849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.420 [INFO][4849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.420 [INFO][4849] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-c-e429241c3f' Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.427 [INFO][4849] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.431 [INFO][4849] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.436 [INFO][4849] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.438 [INFO][4849] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.440 [INFO][4849] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.440 [INFO][4849] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.443 [INFO][4849] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.446 [INFO][4849] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.453 [INFO][4849] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.72/26] block=192.168.109.64/26 handle="k8s-pod-network.72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.453 [INFO][4849] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.72/26] handle="k8s-pod-network.72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.454 [INFO][4849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:34:27.485332 containerd[1498]: 2025-09-12 17:34:27.454 [INFO][4849] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.72/26] IPv6=[] ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" HandleID="k8s-pod-network.72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:27.489395 containerd[1498]: 2025-09-12 17:34:27.456 [INFO][4837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-p9sbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0", GenerateName:"calico-apiserver-5dcc9d7764-", Namespace:"calico-apiserver", SelfLink:"", UID:"37ecfd1d-6ac6-4ad3-9078-efcb50051c47", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcc9d7764", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"", Pod:"calico-apiserver-5dcc9d7764-p9sbl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b05186054c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:27.489395 containerd[1498]: 2025-09-12 17:34:27.456 [INFO][4837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.72/32] ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-p9sbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:27.489395 containerd[1498]: 2025-09-12 17:34:27.456 [INFO][4837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b05186054c ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-p9sbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:27.489395 containerd[1498]: 2025-09-12 17:34:27.460 [INFO][4837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-p9sbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:27.489395 containerd[1498]: 2025-09-12 
17:34:27.461 [INFO][4837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-p9sbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0", GenerateName:"calico-apiserver-5dcc9d7764-", Namespace:"calico-apiserver", SelfLink:"", UID:"37ecfd1d-6ac6-4ad3-9078-efcb50051c47", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcc9d7764", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc", Pod:"calico-apiserver-5dcc9d7764-p9sbl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b05186054c", MAC:"7e:87:63:01:e6:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:27.489395 containerd[1498]: 2025-09-12 17:34:27.482 [INFO][4837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-p9sbl" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:27.524211 containerd[1498]: time="2025-09-12T17:34:27.523527132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:27.524211 containerd[1498]: time="2025-09-12T17:34:27.523580983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:27.524211 containerd[1498]: time="2025-09-12T17:34:27.523593558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:27.524211 containerd[1498]: time="2025-09-12T17:34:27.523656348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:27.559697 systemd[1]: Started cri-containerd-72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc.scope - libcontainer container 72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc. 
Sep 12 17:34:27.573347 kubelet[2549]: I0912 17:34:27.573303 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hhxgk" podStartSLOduration=37.573285954 podStartE2EDuration="37.573285954s" podCreationTimestamp="2025-09-12 17:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:34:27.558569183 +0000 UTC m=+44.389408064" watchObservedRunningTime="2025-09-12 17:34:27.573285954 +0000 UTC m=+44.404124835" Sep 12 17:34:27.667463 containerd[1498]: time="2025-09-12T17:34:27.667403587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcc9d7764-p9sbl,Uid:37ecfd1d-6ac6-4ad3-9078-efcb50051c47,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc\"" Sep 12 17:34:27.835401 systemd-networkd[1386]: cali1d08c142ad8: Gained IPv6LL Sep 12 17:34:28.267034 containerd[1498]: time="2025-09-12T17:34:28.266983524Z" level=info msg="StopPodSandbox for \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\"" Sep 12 17:34:28.285003 systemd-networkd[1386]: cali84d58f1c53a: Gained IPv6LL Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.329 [INFO][4918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.329 [INFO][4918] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" iface="eth0" netns="/var/run/netns/cni-4415c386-6133-7825-10e6-55b5f5fdbe3c" Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.330 [INFO][4918] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" iface="eth0" netns="/var/run/netns/cni-4415c386-6133-7825-10e6-55b5f5fdbe3c" Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.330 [INFO][4918] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" iface="eth0" netns="/var/run/netns/cni-4415c386-6133-7825-10e6-55b5f5fdbe3c" Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.330 [INFO][4918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.330 [INFO][4918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.366 [INFO][4925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" HandleID="k8s-pod-network.cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.367 [INFO][4925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.367 [INFO][4925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.373 [WARNING][4925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" HandleID="k8s-pod-network.cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.374 [INFO][4925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" HandleID="k8s-pod-network.cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.376 [INFO][4925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:28.385170 containerd[1498]: 2025-09-12 17:34:28.379 [INFO][4918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:28.385170 containerd[1498]: time="2025-09-12T17:34:28.382977463Z" level=info msg="TearDown network for sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\" successfully" Sep 12 17:34:28.385170 containerd[1498]: time="2025-09-12T17:34:28.383000786Z" level=info msg="StopPodSandbox for \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\" returns successfully" Sep 12 17:34:28.385722 systemd[1]: run-netns-cni\x2d4415c386\x2d6133\x2d7825\x2d10e6\x2d55b5f5fdbe3c.mount: Deactivated successfully. Sep 12 17:34:28.386265 containerd[1498]: time="2025-09-12T17:34:28.385899687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n262w,Uid:2038863b-3df2-4d79-ad86-96d22113a91c,Namespace:kube-system,Attempt:1,}" Sep 12 17:34:28.528671 systemd-networkd[1386]: cali28d60b71983: Link UP Sep 12 17:34:28.531278 systemd-networkd[1386]: cali28d60b71983: Gained carrier Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.439 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0 coredns-7c65d6cfc9- kube-system 2038863b-3df2-4d79-ad86-96d22113a91c 985 0 2025-09-12 17:33:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-c-e429241c3f coredns-7c65d6cfc9-n262w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28d60b71983 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n262w" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.439 [INFO][4932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n262w" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.480 [INFO][4949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" HandleID="k8s-pod-network.9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.480 [INFO][4949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" HandleID="k8s-pod-network.9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-c-e429241c3f", "pod":"coredns-7c65d6cfc9-n262w", "timestamp":"2025-09-12 17:34:28.480714651 +0000 UTC"}, Hostname:"ci-4081-3-6-c-e429241c3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.480 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.481 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.481 [INFO][4949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-c-e429241c3f' Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.488 [INFO][4949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.492 [INFO][4949] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.501 [INFO][4949] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.503 [INFO][4949] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.505 [INFO][4949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.505 [INFO][4949] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.507 [INFO][4949] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8 Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.511 [INFO][4949] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.518 [INFO][4949] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.73/26] block=192.168.109.64/26 handle="k8s-pod-network.9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" 
host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.519 [INFO][4949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.73/26] handle="k8s-pod-network.9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.519 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:28.555893 containerd[1498]: 2025-09-12 17:34:28.520 [INFO][4949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.73/26] IPv6=[] ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" HandleID="k8s-pod-network.9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:28.559007 containerd[1498]: 2025-09-12 17:34:28.523 [INFO][4932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n262w" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2038863b-3df2-4d79-ad86-96d22113a91c", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"", Pod:"coredns-7c65d6cfc9-n262w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28d60b71983", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:28.559007 containerd[1498]: 2025-09-12 17:34:28.524 [INFO][4932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.73/32] ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n262w" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:28.559007 containerd[1498]: 2025-09-12 17:34:28.524 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28d60b71983 
ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n262w" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:28.559007 containerd[1498]: 2025-09-12 17:34:28.531 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n262w" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:28.559007 containerd[1498]: 2025-09-12 17:34:28.533 [INFO][4932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n262w" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2038863b-3df2-4d79-ad86-96d22113a91c", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8", Pod:"coredns-7c65d6cfc9-n262w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28d60b71983", MAC:"06:49:46:13:e3:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:28.559007 containerd[1498]: 2025-09-12 17:34:28.548 [INFO][4932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n262w" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:28.594669 containerd[1498]: time="2025-09-12T17:34:28.594588274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:28.594669 containerd[1498]: time="2025-09-12T17:34:28.594634882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:28.594910 containerd[1498]: time="2025-09-12T17:34:28.594872786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:28.595119 containerd[1498]: time="2025-09-12T17:34:28.595053781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:28.603620 systemd-networkd[1386]: cali0b05186054c: Gained IPv6LL Sep 12 17:34:28.620260 systemd[1]: Started cri-containerd-9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8.scope - libcontainer container 9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8. Sep 12 17:34:28.671463 containerd[1498]: time="2025-09-12T17:34:28.671426104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n262w,Uid:2038863b-3df2-4d79-ad86-96d22113a91c,Namespace:kube-system,Attempt:1,} returns sandbox id \"9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8\"" Sep 12 17:34:28.676527 containerd[1498]: time="2025-09-12T17:34:28.676364545Z" level=info msg="CreateContainer within sandbox \"9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:34:28.700773 containerd[1498]: time="2025-09-12T17:34:28.700725567Z" level=info msg="CreateContainer within sandbox \"9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d0e8f8cd6680e8de76e6762a91c870e86129c9e9102766dc8147c8196d1bd61\"" Sep 12 17:34:28.703417 containerd[1498]: time="2025-09-12T17:34:28.703388988Z" level=info msg="StartContainer for \"6d0e8f8cd6680e8de76e6762a91c870e86129c9e9102766dc8147c8196d1bd61\"" Sep 12 17:34:28.735502 systemd[1]: Started cri-containerd-6d0e8f8cd6680e8de76e6762a91c870e86129c9e9102766dc8147c8196d1bd61.scope - libcontainer container 6d0e8f8cd6680e8de76e6762a91c870e86129c9e9102766dc8147c8196d1bd61. Sep 12 17:34:28.788760 containerd[1498]: time="2025-09-12T17:34:28.788672700Z" level=info msg="StartContainer for \"6d0e8f8cd6680e8de76e6762a91c870e86129c9e9102766dc8147c8196d1bd61\" returns successfully" Sep 12 17:34:29.387306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687200347.mount: Deactivated successfully. 
Sep 12 17:34:29.519910 containerd[1498]: time="2025-09-12T17:34:29.518759133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:34:29.522019 containerd[1498]: time="2025-09-12T17:34:29.521974686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.483077131s" Sep 12 17:34:29.522200 containerd[1498]: time="2025-09-12T17:34:29.522098803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:34:29.524732 containerd[1498]: time="2025-09-12T17:34:29.524666289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:34:29.529796 containerd[1498]: time="2025-09-12T17:34:29.529773471Z" level=info msg="CreateContainer within sandbox \"0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:34:29.545615 kubelet[2549]: I0912 17:34:29.545494 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-n262w" podStartSLOduration=39.54547599 podStartE2EDuration="39.54547599s" podCreationTimestamp="2025-09-12 17:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:34:29.543087153 +0000 UTC m=+46.373926034" watchObservedRunningTime="2025-09-12 17:34:29.54547599 +0000 UTC m=+46.376314870" Sep 12 17:34:29.548088 containerd[1498]: time="2025-09-12T17:34:29.547735268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:29.552429 containerd[1498]: time="2025-09-12T17:34:29.552365651Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:29.553023 containerd[1498]: time="2025-09-12T17:34:29.552897004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:29.560502 containerd[1498]: time="2025-09-12T17:34:29.560088450Z" level=info msg="CreateContainer within sandbox \"0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a5c41438b2236e10e177f94b10b591ca57c91754dd371550020dfb424bbbde00\"" Sep 12 17:34:29.561662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351110253.mount: Deactivated successfully. Sep 12 17:34:29.565566 containerd[1498]: time="2025-09-12T17:34:29.565541490Z" level=info msg="StartContainer for \"a5c41438b2236e10e177f94b10b591ca57c91754dd371550020dfb424bbbde00\"" Sep 12 17:34:29.645794 systemd[1]: Started cri-containerd-a5c41438b2236e10e177f94b10b591ca57c91754dd371550020dfb424bbbde00.scope - libcontainer container a5c41438b2236e10e177f94b10b591ca57c91754dd371550020dfb424bbbde00. 
Sep 12 17:34:29.703087 containerd[1498]: time="2025-09-12T17:34:29.703032575Z" level=info msg="StartContainer for \"a5c41438b2236e10e177f94b10b591ca57c91754dd371550020dfb424bbbde00\" returns successfully" Sep 12 17:34:30.395373 systemd-networkd[1386]: cali28d60b71983: Gained IPv6LL Sep 12 17:34:30.559546 kubelet[2549]: I0912 17:34:30.559353 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-gbp7h" podStartSLOduration=23.636654096 podStartE2EDuration="29.559329199s" podCreationTimestamp="2025-09-12 17:34:01 +0000 UTC" firstStartedPulling="2025-09-12 17:34:23.600743637 +0000 UTC m=+40.431582528" lastFinishedPulling="2025-09-12 17:34:29.52341875 +0000 UTC m=+46.354257631" observedRunningTime="2025-09-12 17:34:30.557826234 +0000 UTC m=+47.388665114" watchObservedRunningTime="2025-09-12 17:34:30.559329199 +0000 UTC m=+47.390168090" Sep 12 17:34:31.813342 kubelet[2549]: I0912 17:34:31.813285 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:34:32.636352 containerd[1498]: time="2025-09-12T17:34:32.636307320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:32.637728 containerd[1498]: time="2025-09-12T17:34:32.637359375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:34:32.640043 containerd[1498]: time="2025-09-12T17:34:32.638452119Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:32.640664 containerd[1498]: time="2025-09-12T17:34:32.640624350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.115903557s" Sep 12 17:34:32.640723 containerd[1498]: time="2025-09-12T17:34:32.640667502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:34:32.642376 containerd[1498]: time="2025-09-12T17:34:32.642356090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:32.652677 containerd[1498]: time="2025-09-12T17:34:32.652646687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:34:32.655991 containerd[1498]: time="2025-09-12T17:34:32.655937991Z" level=info msg="CreateContainer within sandbox \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:34:32.691128 containerd[1498]: time="2025-09-12T17:34:32.689600753Z" level=info msg="CreateContainer within sandbox \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\"" Sep 12 17:34:32.691128 containerd[1498]: time="2025-09-12T17:34:32.690307931Z" 
level=info msg="StartContainer for \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\"" Sep 12 17:34:32.724302 systemd[1]: Started cri-containerd-8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630.scope - libcontainer container 8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630. Sep 12 17:34:32.770552 containerd[1498]: time="2025-09-12T17:34:32.770486886Z" level=info msg="StartContainer for \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\" returns successfully" Sep 12 17:34:34.666855 kubelet[2549]: I0912 17:34:34.657167 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65c8b64669-dswbl" podStartSLOduration=27.553128095 podStartE2EDuration="35.636647267s" podCreationTimestamp="2025-09-12 17:33:59 +0000 UTC" firstStartedPulling="2025-09-12 17:34:24.568930268 +0000 UTC m=+41.399769150" lastFinishedPulling="2025-09-12 17:34:32.652449441 +0000 UTC m=+49.483288322" observedRunningTime="2025-09-12 17:34:33.577034645 +0000 UTC m=+50.407873526" watchObservedRunningTime="2025-09-12 17:34:34.636647267 +0000 UTC m=+51.467486148" Sep 12 17:34:36.629199 containerd[1498]: time="2025-09-12T17:34:36.628346754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:36.630913 containerd[1498]: time="2025-09-12T17:34:36.630840195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:34:36.631933 containerd[1498]: time="2025-09-12T17:34:36.631861370Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:36.644644 containerd[1498]: time="2025-09-12T17:34:36.644586115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:36.646411 containerd[1498]: time="2025-09-12T17:34:36.646386836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.993680885s" Sep 12 17:34:36.646505 containerd[1498]: time="2025-09-12T17:34:36.646489932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:34:36.668358 containerd[1498]: time="2025-09-12T17:34:36.668322298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:34:36.815246 containerd[1498]: time="2025-09-12T17:34:36.815200325Z" level=info msg="CreateContainer within sandbox \"1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:34:36.846281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106533631.mount: Deactivated successfully. 
Sep 12 17:34:36.848238 containerd[1498]: time="2025-09-12T17:34:36.847372931Z" level=info msg="CreateContainer within sandbox \"1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7d8ddecf546cbdeb3c317085e599bb6cbe64e4b649d8c9849db667df19f023ce\"" Sep 12 17:34:36.854592 containerd[1498]: time="2025-09-12T17:34:36.854548172Z" level=info msg="StartContainer for \"7d8ddecf546cbdeb3c317085e599bb6cbe64e4b649d8c9849db667df19f023ce\"" Sep 12 17:34:36.913693 systemd[1]: Started cri-containerd-7d8ddecf546cbdeb3c317085e599bb6cbe64e4b649d8c9849db667df19f023ce.scope - libcontainer container 7d8ddecf546cbdeb3c317085e599bb6cbe64e4b649d8c9849db667df19f023ce. Sep 12 17:34:36.990443 containerd[1498]: time="2025-09-12T17:34:36.990235959Z" level=info msg="StartContainer for \"7d8ddecf546cbdeb3c317085e599bb6cbe64e4b649d8c9849db667df19f023ce\" returns successfully" Sep 12 17:34:37.176720 containerd[1498]: time="2025-09-12T17:34:37.176473732Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:37.178496 containerd[1498]: time="2025-09-12T17:34:37.178468706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:34:37.182557 containerd[1498]: time="2025-09-12T17:34:37.182535150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 513.242384ms" Sep 12 17:34:37.182640 containerd[1498]: time="2025-09-12T17:34:37.182621895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:34:37.184526 containerd[1498]: time="2025-09-12T17:34:37.184510647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:34:37.185909 containerd[1498]: time="2025-09-12T17:34:37.185878431Z" level=info msg="CreateContainer within sandbox \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:34:37.220659 containerd[1498]: time="2025-09-12T17:34:37.219580709Z" level=info msg="CreateContainer within sandbox \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4\"" Sep 12 17:34:37.225167 containerd[1498]: time="2025-09-12T17:34:37.222087685Z" level=info msg="StartContainer for \"65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4\"" Sep 12 17:34:37.319276 systemd[1]: Started cri-containerd-65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4.scope - libcontainer container 65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4. 
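The PullImage/ImageCreate pairs in these entries are containerd resolving a tag, fetching layers, and recording the digest it reports back to the kubelet. For reference, the same pull can be driven directly against the containerd socket with its Go client in the "k8s.io" namespace; a minimal sketch of that client call, not the kubelet's actual CRI path:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s @ %s", img.Name(), img.Target().Digest)
}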
Sep 12 17:34:37.394283 containerd[1498]: time="2025-09-12T17:34:37.394246566Z" level=info msg="StartContainer for \"65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4\" returns successfully" Sep 12 17:34:38.079723 kubelet[2549]: I0912 17:34:38.010968 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-675d4f9887-6qpj9" podStartSLOduration=24.976500976 podStartE2EDuration="35.950137282s" podCreationTimestamp="2025-09-12 17:34:02 +0000 UTC" firstStartedPulling="2025-09-12 17:34:25.694418022 +0000 UTC m=+42.525256902" lastFinishedPulling="2025-09-12 17:34:36.668054317 +0000 UTC m=+53.498893208" observedRunningTime="2025-09-12 17:34:37.930117716 +0000 UTC m=+54.760956598" watchObservedRunningTime="2025-09-12 17:34:37.950137282 +0000 UTC m=+54.780976162" Sep 12 17:34:38.093084 kubelet[2549]: I0912 17:34:38.092428 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65c8b64669-gr9jn" podStartSLOduration=27.652748674 podStartE2EDuration="39.092412034s" podCreationTimestamp="2025-09-12 17:33:59 +0000 UTC" firstStartedPulling="2025-09-12 17:34:25.744630173 +0000 UTC m=+42.575469055" lastFinishedPulling="2025-09-12 17:34:37.184293524 +0000 UTC m=+54.015132415" observedRunningTime="2025-09-12 17:34:38.092072413 +0000 UTC m=+54.922911294" watchObservedRunningTime="2025-09-12 17:34:38.092412034 +0000 UTC m=+54.923250915" Sep 12 17:34:38.934794 kubelet[2549]: I0912 17:34:38.934730 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:34:40.407172 containerd[1498]: time="2025-09-12T17:34:40.405316401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:40.407172 containerd[1498]: time="2025-09-12T17:34:40.406788298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:34:40.416782 containerd[1498]: time="2025-09-12T17:34:40.416380914Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:40.419839 containerd[1498]: time="2025-09-12T17:34:40.419811218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:40.420787 containerd[1498]: time="2025-09-12T17:34:40.420761410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.236167005s" Sep 12 17:34:40.421213 containerd[1498]: time="2025-09-12T17:34:40.420864770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:34:40.425237 containerd[1498]: time="2025-09-12T17:34:40.425058163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:34:40.448012 containerd[1498]: time="2025-09-12T17:34:40.447975674Z" level=info msg="CreateContainer within sandbox 
\"24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:34:40.474805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251368064.mount: Deactivated successfully. Sep 12 17:34:40.482156 containerd[1498]: time="2025-09-12T17:34:40.482100409Z" level=info msg="CreateContainer within sandbox \"24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"95a50b274ce1720fd1696eff9c5649401e1e226a07aaa883faa85de9f28a9805\"" Sep 12 17:34:40.483244 containerd[1498]: time="2025-09-12T17:34:40.482762151Z" level=info msg="StartContainer for \"95a50b274ce1720fd1696eff9c5649401e1e226a07aaa883faa85de9f28a9805\"" Sep 12 17:34:40.525265 systemd[1]: Started cri-containerd-95a50b274ce1720fd1696eff9c5649401e1e226a07aaa883faa85de9f28a9805.scope - libcontainer container 95a50b274ce1720fd1696eff9c5649401e1e226a07aaa883faa85de9f28a9805. Sep 12 17:34:40.553756 containerd[1498]: time="2025-09-12T17:34:40.552892689Z" level=info msg="StartContainer for \"95a50b274ce1720fd1696eff9c5649401e1e226a07aaa883faa85de9f28a9805\" returns successfully" Sep 12 17:34:40.896363 containerd[1498]: time="2025-09-12T17:34:40.896297866Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:40.898782 containerd[1498]: time="2025-09-12T17:34:40.898732477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:34:40.900020 containerd[1498]: time="2025-09-12T17:34:40.899982607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 474.892376ms" Sep 12 17:34:40.900020 containerd[1498]: time="2025-09-12T17:34:40.900014526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:34:40.901565 containerd[1498]: time="2025-09-12T17:34:40.901135630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:34:40.903369 containerd[1498]: time="2025-09-12T17:34:40.903336633Z" level=info msg="CreateContainer within sandbox \"72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:34:40.924384 containerd[1498]: time="2025-09-12T17:34:40.924340086Z" level=info msg="CreateContainer within sandbox \"72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"baa021f4328ed83dd86b6c796a7c06dde7a938a8c2e2501d7dfff18b9b452885\"" Sep 12 17:34:40.926825 containerd[1498]: time="2025-09-12T17:34:40.926783985Z" level=info msg="StartContainer for \"baa021f4328ed83dd86b6c796a7c06dde7a938a8c2e2501d7dfff18b9b452885\"" Sep 12 17:34:40.986301 systemd[1]: Started cri-containerd-baa021f4328ed83dd86b6c796a7c06dde7a938a8c2e2501d7dfff18b9b452885.scope - libcontainer container baa021f4328ed83dd86b6c796a7c06dde7a938a8c2e2501d7dfff18b9b452885. 
Sep 12 17:34:41.070165 containerd[1498]: time="2025-09-12T17:34:41.070012900Z" level=info msg="StartContainer for \"baa021f4328ed83dd86b6c796a7c06dde7a938a8c2e2501d7dfff18b9b452885\" returns successfully" Sep 12 17:34:42.074870 kubelet[2549]: I0912 17:34:42.068081 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dcc9d7764-p9sbl" podStartSLOduration=28.835913474 podStartE2EDuration="42.068060175s" podCreationTimestamp="2025-09-12 17:34:00 +0000 UTC" firstStartedPulling="2025-09-12 17:34:27.668701503 +0000 UTC m=+44.499540384" lastFinishedPulling="2025-09-12 17:34:40.900848203 +0000 UTC m=+57.731687085" observedRunningTime="2025-09-12 17:34:42.039096907 +0000 UTC m=+58.869935809" watchObservedRunningTime="2025-09-12 17:34:42.068060175 +0000 UTC m=+58.898899055" Sep 12 17:34:43.571012 kubelet[2549]: I0912 17:34:43.569893 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:34:43.612884 containerd[1498]: time="2025-09-12T17:34:43.612778177Z" level=info msg="StopPodSandbox for \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\"" Sep 12 17:34:43.742292 systemd[1]: Created slice kubepods-besteffort-pod5ee0dacb_ec72_45e8_b0dc_f56af3f4866c.slice - libcontainer container kubepods-besteffort-pod5ee0dacb_ec72_45e8_b0dc_f56af3f4866c.slice. Sep 12 17:34:43.863197 kubelet[2549]: I0912 17:34:43.863057 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlqdb\" (UniqueName: \"kubernetes.io/projected/5ee0dacb-ec72-45e8-b0dc-f56af3f4866c-kube-api-access-vlqdb\") pod \"calico-apiserver-5dcc9d7764-2rq9m\" (UID: \"5ee0dacb-ec72-45e8-b0dc-f56af3f4866c\") " pod="calico-apiserver/calico-apiserver-5dcc9d7764-2rq9m" Sep 12 17:34:43.870188 kubelet[2549]: I0912 17:34:43.866067 2549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ee0dacb-ec72-45e8-b0dc-f56af3f4866c-calico-apiserver-certs\") pod \"calico-apiserver-5dcc9d7764-2rq9m\" (UID: \"5ee0dacb-ec72-45e8-b0dc-f56af3f4866c\") " pod="calico-apiserver/calico-apiserver-5dcc9d7764-2rq9m" Sep 12 17:34:43.882154 containerd[1498]: time="2025-09-12T17:34:43.881402851Z" level=info msg="StopContainer for \"65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4\" with timeout 30 (s)" Sep 12 17:34:43.890403 containerd[1498]: time="2025-09-12T17:34:43.889957333Z" level=info msg="Stop container \"65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4\" with signal terminated" Sep 12 17:34:44.054246 systemd[1]: cri-containerd-65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4.scope: Deactivated successfully. Sep 12 17:34:44.200347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4-rootfs.mount: Deactivated successfully. 
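The stop sequence that follows below (StopContainer with timeout 30, "Stop container ... with signal terminated", then the scope deactivating and the rootfs mount being torn down) follows the usual CRI contract: SIGTERM first, a grace period, and a hard kill only if the process outlives it. A toy sketch of that pattern on a plain process, not containerd's implementation:

package main

import (
	"log"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mirrors the stop contract seen in the log: SIGTERM first,
// then SIGKILL if the process is still alive when the grace period expires.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		return cmd.Process.Kill() // hard stop after the timeout
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	log.Println(stopWithGrace(cmd, 30*time.Second))
}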
Sep 12 17:34:44.213808 containerd[1498]: time="2025-09-12T17:34:44.203312956Z" level=info msg="shim disconnected" id=65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4 namespace=k8s.io Sep 12 17:34:44.221244 containerd[1498]: time="2025-09-12T17:34:44.221205431Z" level=warning msg="cleaning up after shim disconnected" id=65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4 namespace=k8s.io Sep 12 17:34:44.221362 containerd[1498]: time="2025-09-12T17:34:44.221348775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:34:44.376997 containerd[1498]: time="2025-09-12T17:34:44.376950003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcc9d7764-2rq9m,Uid:5ee0dacb-ec72-45e8-b0dc-f56af3f4866c,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:34:44.516592 containerd[1498]: time="2025-09-12T17:34:44.516138138Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:34:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.184 [WARNING][5473] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2038863b-3df2-4d79-ad86-96d22113a91c", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8", Pod:"coredns-7c65d6cfc9-n262w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28d60b71983", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.197 [INFO][5473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.197 [INFO][5473] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" iface="eth0" netns="" Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.197 [INFO][5473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.197 [INFO][5473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.513 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" HandleID="k8s-pod-network.cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.518 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.519 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.559 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" HandleID="k8s-pod-network.cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.559 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" HandleID="k8s-pod-network.cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.562 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:44.581251 containerd[1498]: 2025-09-12 17:34:44.566 [INFO][5473] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:44.581251 containerd[1498]: time="2025-09-12T17:34:44.579775623Z" level=info msg="TearDown network for sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\" successfully" Sep 12 17:34:44.581251 containerd[1498]: time="2025-09-12T17:34:44.579829322Z" level=info msg="StopPodSandbox for \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\" returns successfully" Sep 12 17:34:44.644658 containerd[1498]: time="2025-09-12T17:34:44.643788560Z" level=info msg="RemovePodSandbox for \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\"" Sep 12 17:34:44.644658 containerd[1498]: time="2025-09-12T17:34:44.643828272Z" level=info msg="Forcibly stopping sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\"" Sep 12 17:34:44.651623 containerd[1498]: time="2025-09-12T17:34:44.650764542Z" level=info msg="StopContainer for \"65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4\" returns successfully" Sep 12 17:34:44.700054 containerd[1498]: time="2025-09-12T17:34:44.699363673Z" level=info msg="StopPodSandbox for \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\"" Sep 12 17:34:44.720904 containerd[1498]: time="2025-09-12T17:34:44.720745486Z" level=info msg="Container to stop \"65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:34:44.778316 systemd[1]: cri-containerd-4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632.scope: Deactivated successfully. Sep 12 17:34:44.937885 systemd-networkd[1386]: cali92f1eb9d56d: Link UP Sep 12 17:34:44.938054 systemd-networkd[1386]: cali92f1eb9d56d: Gained carrier Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.783 [WARNING][5543] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2038863b-3df2-4d79-ad86-96d22113a91c", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"9f97b458abd922e4feb99c03d8b9ea72293ada4c8e7077cee37ac3b0f9592ce8", Pod:"coredns-7c65d6cfc9-n262w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28d60b71983", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.784 [INFO][5543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.784 [INFO][5543] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" iface="eth0" netns="" Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.784 [INFO][5543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.784 [INFO][5543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.847 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" HandleID="k8s-pod-network.cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.847 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.892 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.907 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" HandleID="k8s-pod-network.cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.908 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" HandleID="k8s-pod-network.cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--n262w-eth0" Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.910 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:44.956625 containerd[1498]: 2025-09-12 17:34:44.929 [INFO][5543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20" Sep 12 17:34:44.956625 containerd[1498]: time="2025-09-12T17:34:44.956535828Z" level=info msg="TearDown network for sandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\" successfully" Sep 12 17:34:44.971129 containerd[1498]: time="2025-09-12T17:34:44.971083769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:44.971214 containerd[1498]: time="2025-09-12T17:34:44.971198069Z" level=info msg="RemovePodSandbox \"cef41a080d4627d79f5f46198ad248f09c5b265eb35788634f4da47290ceec20\" returns successfully" Sep 12 17:34:44.973054 containerd[1498]: time="2025-09-12T17:34:44.972496668Z" level=info msg="StopPodSandbox for \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\"" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.691 [INFO][5528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0 calico-apiserver-5dcc9d7764- calico-apiserver 5ee0dacb-ec72-45e8-b0dc-f56af3f4866c 1127 0 2025-09-12 17:34:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dcc9d7764 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-c-e429241c3f calico-apiserver-5dcc9d7764-2rq9m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali92f1eb9d56d [] [] }} ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-2rq9m" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.691 [INFO][5528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-2rq9m" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 
17:34:44.798 [INFO][5548] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" HandleID="k8s-pod-network.63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.798 [INFO][5548] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" HandleID="k8s-pod-network.63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000fa780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-c-e429241c3f", "pod":"calico-apiserver-5dcc9d7764-2rq9m", "timestamp":"2025-09-12 17:34:44.798831801 +0000 UTC"}, Hostname:"ci-4081-3-6-c-e429241c3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.798 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.799 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.799 [INFO][5548] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-c-e429241c3f' Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.824 [INFO][5548] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.851 [INFO][5548] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.858 [INFO][5548] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.861 [INFO][5548] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.863 [INFO][5548] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.868 [INFO][5548] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.872 [INFO][5548] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.879 [INFO][5548] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.891 [INFO][5548] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.74/26] 
block=192.168.109.64/26 handle="k8s-pod-network.63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.892 [INFO][5548] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.74/26] handle="k8s-pod-network.63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" host="ci-4081-3-6-c-e429241c3f" Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.892 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:45.010202 containerd[1498]: 2025-09-12 17:34:44.892 [INFO][5548] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.74/26] IPv6=[] ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" HandleID="k8s-pod-network.63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0" Sep 12 17:34:45.014045 containerd[1498]: 2025-09-12 17:34:44.905 [INFO][5528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-2rq9m" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0", GenerateName:"calico-apiserver-5dcc9d7764-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ee0dacb-ec72-45e8-b0dc-f56af3f4866c", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcc9d7764", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"", Pod:"calico-apiserver-5dcc9d7764-2rq9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92f1eb9d56d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:45.014045 containerd[1498]: 2025-09-12 17:34:44.907 [INFO][5528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.74/32] ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-2rq9m" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0" Sep 12 17:34:45.014045 containerd[1498]: 2025-09-12 17:34:44.907 [INFO][5528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92f1eb9d56d ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Namespace="calico-apiserver" 
Pod="calico-apiserver-5dcc9d7764-2rq9m" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0" Sep 12 17:34:45.014045 containerd[1498]: 2025-09-12 17:34:44.941 [INFO][5528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-2rq9m" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0" Sep 12 17:34:45.014045 containerd[1498]: 2025-09-12 17:34:44.943 [INFO][5528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-2rq9m" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0", GenerateName:"calico-apiserver-5dcc9d7764-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ee0dacb-ec72-45e8-b0dc-f56af3f4866c", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcc9d7764", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d", Pod:"calico-apiserver-5dcc9d7764-2rq9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92f1eb9d56d", MAC:"8a:d1:d2:57:3e:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:45.014045 containerd[1498]: 2025-09-12 17:34:44.986 [INFO][5528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d" Namespace="calico-apiserver" Pod="calico-apiserver-5dcc9d7764-2rq9m" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--2rq9m-eth0" Sep 12 17:34:45.036293 containerd[1498]: time="2025-09-12T17:34:45.036126639Z" level=info msg="shim disconnected" id=4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632 namespace=k8s.io Sep 12 17:34:45.036460 containerd[1498]: time="2025-09-12T17:34:45.036435107Z" level=warning msg="cleaning up after shim disconnected" id=4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632 namespace=k8s.io Sep 12 17:34:45.036522 containerd[1498]: time="2025-09-12T17:34:45.036508582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:34:45.083384 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632-rootfs.mount: Deactivated successfully. Sep 12 17:34:45.083479 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632-shm.mount: Deactivated successfully. Sep 12 17:34:45.274246 containerd[1498]: time="2025-09-12T17:34:45.274172491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:45.275804 containerd[1498]: time="2025-09-12T17:34:45.275741740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:45.276354 containerd[1498]: time="2025-09-12T17:34:45.276331325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:45.292174 containerd[1498]: time="2025-09-12T17:34:45.289166117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:45.415329 systemd[1]: Started cri-containerd-63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d.scope - libcontainer container 63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d. Sep 12 17:34:45.495161 containerd[1498]: time="2025-09-12T17:34:45.494828202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.202 [WARNING][5600] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0", GenerateName:"calico-kube-controllers-675d4f9887-", Namespace:"calico-system", SelfLink:"", UID:"71dc886e-53d6-4485-9dc3-23a94916f815", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"675d4f9887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc", Pod:"calico-kube-controllers-675d4f9887-6qpj9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85291b51178", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.204 [INFO][5600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.204 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" iface="eth0" netns="" Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.204 [INFO][5600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.204 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.438 [INFO][5615] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" HandleID="k8s-pod-network.6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0" Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.440 [INFO][5615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.440 [INFO][5615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.465 [WARNING][5615] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" HandleID="k8s-pod-network.6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0" Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.465 [INFO][5615] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" HandleID="k8s-pod-network.6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0" Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.468 [INFO][5615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:45.495161 containerd[1498]: 2025-09-12 17:34:45.484 [INFO][5600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Sep 12 17:34:45.495161 containerd[1498]: time="2025-09-12T17:34:45.494928767Z" level=info msg="TearDown network for sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\" successfully" Sep 12 17:34:45.495161 containerd[1498]: time="2025-09-12T17:34:45.494940369Z" level=info msg="StopPodSandbox for \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\" returns successfully" Sep 12 17:34:45.496055 containerd[1498]: time="2025-09-12T17:34:45.495427105Z" level=info msg="RemovePodSandbox for \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\"" Sep 12 17:34:45.496055 containerd[1498]: time="2025-09-12T17:34:45.495446952Z" level=info msg="Forcibly stopping sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\"" Sep 12 17:34:45.503162 containerd[1498]: time="2025-09-12T17:34:45.503122595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:45.509978 containerd[1498]: time="2025-09-12T17:34:45.508586715Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:45.514106 containerd[1498]: time="2025-09-12T17:34:45.514060223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:45.516137 containerd[1498]: time="2025-09-12T17:34:45.515866328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 4.61467118s" Sep 12 17:34:45.516137 containerd[1498]: time="2025-09-12T17:34:45.515893528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:34:45.562199 systemd-networkd[1386]: cali9dda2b395b3: Link DOWN Sep 12 17:34:45.562208 systemd-networkd[1386]: cali9dda2b395b3: Lost carrier Sep 12 17:34:45.669989 containerd[1498]: 
2025-09-12 17:34:45.574 [WARNING][5682] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0", GenerateName:"calico-kube-controllers-675d4f9887-", Namespace:"calico-system", SelfLink:"", UID:"71dc886e-53d6-4485-9dc3-23a94916f815", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"675d4f9887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"1c76b3db1f0dbef59c19525344e6daf1d8eed9b993383682c2cbe4742c55c3cc", Pod:"calico-kube-controllers-675d4f9887-6qpj9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85291b51178", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.574 [INFO][5682] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.574 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" iface="eth0" netns="" Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.575 [INFO][5682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.575 [INFO][5682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.637 [INFO][5695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" HandleID="k8s-pod-network.6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0" Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.638 [INFO][5695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.640 [INFO][5695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.655 [WARNING][5695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" HandleID="k8s-pod-network.6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0" Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.655 [INFO][5695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" HandleID="k8s-pod-network.6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--kube--controllers--675d4f9887--6qpj9-eth0" Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.659 [INFO][5695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:45.669989 containerd[1498]: 2025-09-12 17:34:45.665 [INFO][5682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc" Sep 12 17:34:45.669989 containerd[1498]: time="2025-09-12T17:34:45.668535962Z" level=info msg="TearDown network for sandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\" successfully" Sep 12 17:34:45.708481 containerd[1498]: time="2025-09-12T17:34:45.708366759Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:45.710225 containerd[1498]: time="2025-09-12T17:34:45.708535692Z" level=info msg="RemovePodSandbox \"6308c505ce92d3a7d6b2feadf41645bee69468ee51ee3bae6ba38951afb18bfc\" returns successfully" Sep 12 17:34:45.817161 containerd[1498]: time="2025-09-12T17:34:45.814982346Z" level=info msg="StopPodSandbox for \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\"" Sep 12 17:34:45.859582 containerd[1498]: time="2025-09-12T17:34:45.859352583Z" level=info msg="CreateContainer within sandbox \"24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.553 [INFO][5656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.553 [INFO][5656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" iface="eth0" netns="/var/run/netns/cni-cb0bedf3-7d6c-95ee-0d11-24f2ceb33faa" Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.554 [INFO][5656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" iface="eth0" netns="/var/run/netns/cni-cb0bedf3-7d6c-95ee-0d11-24f2ceb33faa" Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.564 [INFO][5656] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" after=10.240706ms iface="eth0" netns="/var/run/netns/cni-cb0bedf3-7d6c-95ee-0d11-24f2ceb33faa" Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.564 [INFO][5656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.564 [INFO][5656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.643 [INFO][5690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.643 [INFO][5690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.662 [INFO][5690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.854 [INFO][5690] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.854 [INFO][5690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.859 [INFO][5690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:45.868300 containerd[1498]: 2025-09-12 17:34:45.864 [INFO][5656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:34:45.873209 systemd[1]: run-netns-cni\x2dcb0bedf3\x2d7d6c\x2d95ee\x2d0d11\x2d24f2ceb33faa.mount: Deactivated successfully. 
Sep 12 17:34:45.877267 containerd[1498]: time="2025-09-12T17:34:45.876259742Z" level=info msg="TearDown network for sandbox \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\" successfully" Sep 12 17:34:45.877267 containerd[1498]: time="2025-09-12T17:34:45.876282504Z" level=info msg="StopPodSandbox for \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\" returns successfully" Sep 12 17:34:45.914770 containerd[1498]: time="2025-09-12T17:34:45.914608001Z" level=info msg="CreateContainer within sandbox \"24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2a200f4adef951ef2cf0617862d7de6bc506ebdeede3a22f3af943b3569d6aaf\"" Sep 12 17:34:45.954268 containerd[1498]: time="2025-09-12T17:34:45.953639919Z" level=info msg="StartContainer for \"2a200f4adef951ef2cf0617862d7de6bc506ebdeede3a22f3af943b3569d6aaf\"" Sep 12 17:34:45.968465 containerd[1498]: time="2025-09-12T17:34:45.968425112Z" level=info msg="StopPodSandbox for \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\"" Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.916 [WARNING][5718] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0", GenerateName:"calico-apiserver-65c8b64669-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5d1853-6f02-4cda-8e7a-51c5e86f7848", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c8b64669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632", Pod:"calico-apiserver-65c8b64669-gr9jn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dda2b395b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.916 [INFO][5718] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.916 [INFO][5718] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" iface="eth0" netns="" Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.916 [INFO][5718] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.916 [INFO][5718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.961 [INFO][5725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.961 [INFO][5725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.961 [INFO][5725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.974 [WARNING][5725] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.976 [INFO][5725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.979 [INFO][5725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:46.012787 containerd[1498]: 2025-09-12 17:34:45.990 [INFO][5718] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.012787 containerd[1498]: time="2025-09-12T17:34:46.012356549Z" level=info msg="TearDown network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\" successfully" Sep 12 17:34:46.012787 containerd[1498]: time="2025-09-12T17:34:46.012380773Z" level=info msg="StopPodSandbox for \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\" returns successfully" Sep 12 17:34:46.016853 containerd[1498]: time="2025-09-12T17:34:46.013137638Z" level=info msg="RemovePodSandbox for \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\"" Sep 12 17:34:46.016853 containerd[1498]: time="2025-09-12T17:34:46.013180437Z" level=info msg="Forcibly stopping sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\"" Sep 12 17:34:46.076208 containerd[1498]: time="2025-09-12T17:34:46.073663681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcc9d7764-2rq9m,Uid:5ee0dacb-ec72-45e8-b0dc-f56af3f4866c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d\"" Sep 12 17:34:46.078319 systemd-networkd[1386]: cali92f1eb9d56d: Gained IPv6LL Sep 12 17:34:46.102849 containerd[1498]: time="2025-09-12T17:34:46.102672420Z" level=info msg="CreateContainer within sandbox \"63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:34:46.128076 systemd[1]: run-containerd-runc-k8s.io-2a200f4adef951ef2cf0617862d7de6bc506ebdeede3a22f3af943b3569d6aaf-runc.H9nnoB.mount: Deactivated successfully. Sep 12 17:34:46.131931 containerd[1498]: time="2025-09-12T17:34:46.131218597Z" level=info msg="CreateContainer within sandbox \"63f380f6f2a316f5dda9398c1390170400a711284c6193faa0e27d0969c1318d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d576549ec4ab38b994c383f4c28e743c72a17b71b5897f38e407d50001ab0928\"" Sep 12 17:34:46.136357 containerd[1498]: time="2025-09-12T17:34:46.135737865Z" level=info msg="StartContainer for \"d576549ec4ab38b994c383f4c28e743c72a17b71b5897f38e407d50001ab0928\"" Sep 12 17:34:46.144446 systemd[1]: Started cri-containerd-2a200f4adef951ef2cf0617862d7de6bc506ebdeede3a22f3af943b3569d6aaf.scope - libcontainer container 2a200f4adef951ef2cf0617862d7de6bc506ebdeede3a22f3af943b3569d6aaf. Sep 12 17:34:46.207166 kubelet[2549]: I0912 17:34:46.207115 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:34:46.244344 containerd[1498]: time="2025-09-12T17:34:46.242494296Z" level=info msg="StartContainer for \"2a200f4adef951ef2cf0617862d7de6bc506ebdeede3a22f3af943b3569d6aaf\" returns successfully" Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.066 [WARNING][5743] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0", GenerateName:"calico-apiserver-65c8b64669-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5d1853-6f02-4cda-8e7a-51c5e86f7848", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c8b64669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632", Pod:"calico-apiserver-65c8b64669-gr9jn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dda2b395b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.068 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.068 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" iface="eth0" netns="" Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.068 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.068 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.174 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.174 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.175 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.191 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.192 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.196 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:46.244344 containerd[1498]: 2025-09-12 17:34:46.210 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.244344 containerd[1498]: time="2025-09-12T17:34:46.244113981Z" level=info msg="TearDown network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\" successfully" Sep 12 17:34:46.244344 containerd[1498]: time="2025-09-12T17:34:46.244131783Z" level=info msg="StopPodSandbox for \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\" returns successfully" Sep 12 17:34:46.264819 systemd[1]: Started cri-containerd-d576549ec4ab38b994c383f4c28e743c72a17b71b5897f38e407d50001ab0928.scope - libcontainer container d576549ec4ab38b994c383f4c28e743c72a17b71b5897f38e407d50001ab0928. Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.128 [WARNING][5773] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0", GenerateName:"calico-apiserver-65c8b64669-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5d1853-6f02-4cda-8e7a-51c5e86f7848", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c8b64669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632", Pod:"calico-apiserver-65c8b64669-gr9jn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dda2b395b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.129 [INFO][5773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.129 [INFO][5773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" iface="eth0" netns="" Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.129 [INFO][5773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.129 [INFO][5773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.233 [INFO][5798] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.234 [INFO][5798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.234 [INFO][5798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.259 [WARNING][5798] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.259 [INFO][5798] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" HandleID="k8s-pod-network.de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.261 [INFO][5798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:46.272647 containerd[1498]: 2025-09-12 17:34:46.264 [INFO][5773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe" Sep 12 17:34:46.273202 containerd[1498]: time="2025-09-12T17:34:46.272004379Z" level=info msg="TearDown network for sandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\" successfully" Sep 12 17:34:46.282190 containerd[1498]: time="2025-09-12T17:34:46.280901986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:46.282190 containerd[1498]: time="2025-09-12T17:34:46.281162525Z" level=info msg="RemovePodSandbox \"de83620b894b29aee44d1066c8f31b540f40fedd611587df19f161f665899dbe\" returns successfully" Sep 12 17:34:46.282625 containerd[1498]: time="2025-09-12T17:34:46.282540465Z" level=info msg="StopPodSandbox for \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\"" Sep 12 17:34:46.385750 kubelet[2549]: I0912 17:34:46.385507 2549 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxt25\" (UniqueName: \"kubernetes.io/projected/da5d1853-6f02-4cda-8e7a-51c5e86f7848-kube-api-access-kxt25\") pod \"da5d1853-6f02-4cda-8e7a-51c5e86f7848\" (UID: \"da5d1853-6f02-4cda-8e7a-51c5e86f7848\") " Sep 12 17:34:46.385750 kubelet[2549]: I0912 17:34:46.385584 2549 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da5d1853-6f02-4cda-8e7a-51c5e86f7848-calico-apiserver-certs\") pod \"da5d1853-6f02-4cda-8e7a-51c5e86f7848\" (UID: \"da5d1853-6f02-4cda-8e7a-51c5e86f7848\") " Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.354 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6b2d4555-77d4-4589-97dc-8cc68e252176", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff", Pod:"goldmane-7988f88666-gbp7h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0db79aea7fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.355 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.355 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" iface="eth0" netns="" Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.355 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.355 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.405 [INFO][5862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" HandleID="k8s-pod-network.c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0" Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.408 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.408 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.414 [WARNING][5862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" HandleID="k8s-pod-network.c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0" Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.415 [INFO][5862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" HandleID="k8s-pod-network.c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0" Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.417 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:46.426715 containerd[1498]: 2025-09-12 17:34:46.421 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Sep 12 17:34:46.429461 containerd[1498]: time="2025-09-12T17:34:46.427380038Z" level=info msg="TearDown network for sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\" successfully" Sep 12 17:34:46.429461 containerd[1498]: time="2025-09-12T17:34:46.427425932Z" level=info msg="StopPodSandbox for \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\" returns successfully" Sep 12 17:34:46.466993 kubelet[2549]: I0912 17:34:46.462666 2549 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5d1853-6f02-4cda-8e7a-51c5e86f7848-kube-api-access-kxt25" (OuterVolumeSpecName: "kube-api-access-kxt25") pod "da5d1853-6f02-4cda-8e7a-51c5e86f7848" (UID: "da5d1853-6f02-4cda-8e7a-51c5e86f7848"). InnerVolumeSpecName "kube-api-access-kxt25". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:34:46.468433 kubelet[2549]: I0912 17:34:46.463166 2549 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5d1853-6f02-4cda-8e7a-51c5e86f7848-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "da5d1853-6f02-4cda-8e7a-51c5e86f7848" (UID: "da5d1853-6f02-4cda-8e7a-51c5e86f7848"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:34:46.470025 containerd[1498]: time="2025-09-12T17:34:46.470000031Z" level=info msg="StartContainer for \"d576549ec4ab38b994c383f4c28e743c72a17b71b5897f38e407d50001ab0928\" returns successfully" Sep 12 17:34:46.471383 containerd[1498]: time="2025-09-12T17:34:46.471295609Z" level=info msg="RemovePodSandbox for \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\"" Sep 12 17:34:46.471466 containerd[1498]: time="2025-09-12T17:34:46.471453410Z" level=info msg="Forcibly stopping sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\"" Sep 12 17:34:46.493390 kubelet[2549]: I0912 17:34:46.493337 2549 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxt25\" (UniqueName: \"kubernetes.io/projected/da5d1853-6f02-4cda-8e7a-51c5e86f7848-kube-api-access-kxt25\") on node \"ci-4081-3-6-c-e429241c3f\" DevicePath \"\"" Sep 12 17:34:46.493390 kubelet[2549]: I0912 17:34:46.493368 2549 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da5d1853-6f02-4cda-8e7a-51c5e86f7848-calico-apiserver-certs\") on node \"ci-4081-3-6-c-e429241c3f\" DevicePath \"\"" Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.613 [WARNING][5893] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6b2d4555-77d4-4589-97dc-8cc68e252176", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"0fde12625e67934ec2be15702c5110de990a745680b90749bbd3a1a76ec934ff", Pod:"goldmane-7988f88666-gbp7h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0db79aea7fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.614 [INFO][5893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.614 [INFO][5893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" iface="eth0" netns="" Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.614 [INFO][5893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.614 [INFO][5893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.657 [INFO][5900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" HandleID="k8s-pod-network.c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0" Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.658 [INFO][5900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.658 [INFO][5900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.665 [WARNING][5900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" HandleID="k8s-pod-network.c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0" Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.665 [INFO][5900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" HandleID="k8s-pod-network.c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Workload="ci--4081--3--6--c--e429241c3f-k8s-goldmane--7988f88666--gbp7h-eth0" Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.667 [INFO][5900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:46.676393 containerd[1498]: 2025-09-12 17:34:46.671 [INFO][5893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397" Sep 12 17:34:46.679285 containerd[1498]: time="2025-09-12T17:34:46.678218182Z" level=info msg="TearDown network for sandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\" successfully" Sep 12 17:34:46.683640 containerd[1498]: time="2025-09-12T17:34:46.683608887Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:34:46.683685 containerd[1498]: time="2025-09-12T17:34:46.683669338Z" level=info msg="RemovePodSandbox \"c3b6351ed1f9a9911ce6a020e25f1743811d3706662d4576e3655d086943f397\" returns successfully" Sep 12 17:34:46.684087 containerd[1498]: time="2025-09-12T17:34:46.684071369Z" level=info msg="StopPodSandbox for \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\"" Sep 12 17:34:46.712602 kubelet[2549]: I0912 17:34:46.712563 2549 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:34:46.716392 kubelet[2549]: I0912 17:34:46.716374 2549 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.728 [WARNING][5915] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0" Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.728 [INFO][5915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.728 [INFO][5915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" iface="eth0" netns="" Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.728 [INFO][5915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.728 [INFO][5915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.769 [INFO][5923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" HandleID="k8s-pod-network.aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0" Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.769 [INFO][5923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.770 [INFO][5923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.776 [WARNING][5923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" HandleID="k8s-pod-network.aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0" Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.776 [INFO][5923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" HandleID="k8s-pod-network.aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0" Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.778 [INFO][5923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:46.784511 containerd[1498]: 2025-09-12 17:34:46.782 [INFO][5915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Sep 12 17:34:46.785468 containerd[1498]: time="2025-09-12T17:34:46.785021308Z" level=info msg="TearDown network for sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\" successfully" Sep 12 17:34:46.785468 containerd[1498]: time="2025-09-12T17:34:46.785048739Z" level=info msg="StopPodSandbox for \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\" returns successfully" Sep 12 17:34:46.785963 containerd[1498]: time="2025-09-12T17:34:46.785740723Z" level=info msg="RemovePodSandbox for \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\"" Sep 12 17:34:46.785963 containerd[1498]: time="2025-09-12T17:34:46.785765589Z" level=info msg="Forcibly stopping sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\"" Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.840 [WARNING][5939] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0" Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.840 [INFO][5939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.840 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" iface="eth0" netns="" Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.840 [INFO][5939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.840 [INFO][5939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.896 [INFO][5946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" HandleID="k8s-pod-network.aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0" Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.898 [INFO][5946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.898 [INFO][5946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.908 [WARNING][5946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" HandleID="k8s-pod-network.aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0" Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.909 [INFO][5946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" HandleID="k8s-pod-network.aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Workload="ci--4081--3--6--c--e429241c3f-k8s-whisker--5c8965db89--kpnzw-eth0" Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.910 [INFO][5946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:46.919312 containerd[1498]: 2025-09-12 17:34:46.915 [INFO][5939] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9" Sep 12 17:34:46.919863 containerd[1498]: time="2025-09-12T17:34:46.919408217Z" level=info msg="TearDown network for sandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\" successfully" Sep 12 17:34:46.927906 containerd[1498]: time="2025-09-12T17:34:46.926336464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:46.927906 containerd[1498]: time="2025-09-12T17:34:46.926591234Z" level=info msg="RemovePodSandbox \"aa950acb8ed09892b6400bee614c6190951e7bda4841214c2ef8edf50133a1b9\" returns successfully" Sep 12 17:34:46.932164 containerd[1498]: time="2025-09-12T17:34:46.932037381Z" level=info msg="StopPodSandbox for \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\"" Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:46.978 [WARNING][5960] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0", GenerateName:"calico-apiserver-65c8b64669-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbff376f-9ddd-4344-b41b-dd9ac22821d6", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c8b64669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083", Pod:"calico-apiserver-65c8b64669-dswbl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia925c21e3e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:46.978 [INFO][5960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:46.979 [INFO][5960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" iface="eth0" netns="" Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:46.979 [INFO][5960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:46.979 [INFO][5960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:47.006 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" HandleID="k8s-pod-network.fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:47.006 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:47.006 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:47.012 [WARNING][5967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" HandleID="k8s-pod-network.fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:47.012 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" HandleID="k8s-pod-network.fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:47.014 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:47.022330 containerd[1498]: 2025-09-12 17:34:47.019 [INFO][5960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Sep 12 17:34:47.022330 containerd[1498]: time="2025-09-12T17:34:47.022194904Z" level=info msg="TearDown network for sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\" successfully" Sep 12 17:34:47.022330 containerd[1498]: time="2025-09-12T17:34:47.022217956Z" level=info msg="StopPodSandbox for \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\" returns successfully" Sep 12 17:34:47.023679 containerd[1498]: time="2025-09-12T17:34:47.023295985Z" level=info msg="RemovePodSandbox for \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\"" Sep 12 17:34:47.023679 containerd[1498]: time="2025-09-12T17:34:47.023317855Z" level=info msg="Forcibly stopping sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\"" Sep 12 17:34:47.081476 systemd[1]: var-lib-kubelet-pods-da5d1853\x2d6f02\x2d4cda\x2d8e7a\x2d51c5e86f7848-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkxt25.mount: Deactivated successfully. Sep 12 17:34:47.081589 systemd[1]: var-lib-kubelet-pods-da5d1853\x2d6f02\x2d4cda\x2d8e7a\x2d51c5e86f7848-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.055 [WARNING][5983] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0", GenerateName:"calico-apiserver-65c8b64669-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbff376f-9ddd-4344-b41b-dd9ac22821d6", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c8b64669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083", Pod:"calico-apiserver-65c8b64669-dswbl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia925c21e3e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.055 [INFO][5983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.055 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" iface="eth0" netns="" Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.055 [INFO][5983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.055 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.113 [INFO][5992] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" HandleID="k8s-pod-network.fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.113 [INFO][5992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.113 [INFO][5992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.121 [WARNING][5992] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" HandleID="k8s-pod-network.fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.121 [INFO][5992] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" HandleID="k8s-pod-network.fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.126 [INFO][5992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:47.131937 containerd[1498]: 2025-09-12 17:34:47.129 [INFO][5983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261" Sep 12 17:34:47.136031 containerd[1498]: time="2025-09-12T17:34:47.132874628Z" level=info msg="TearDown network for sandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\" successfully" Sep 12 17:34:47.139877 containerd[1498]: time="2025-09-12T17:34:47.139287146Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:47.140303 containerd[1498]: time="2025-09-12T17:34:47.139978721Z" level=info msg="RemovePodSandbox \"fd525bc8b797262e736aaea77c39d696c15a946eeb82d2a018588687c6518261\" returns successfully" Sep 12 17:34:47.140739 containerd[1498]: time="2025-09-12T17:34:47.140498310Z" level=info msg="StopPodSandbox for \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\"" Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.193 [WARNING][6008] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262fff4b-d78b-430c-976d-43c3eb6a4adc", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b", Pod:"csi-node-driver-j4tcw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1d08c142ad8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.193 [INFO][6008] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.193 [INFO][6008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" iface="eth0" netns="" Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.193 [INFO][6008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.193 [INFO][6008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.228 [INFO][6015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" HandleID="k8s-pod-network.c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0" Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.228 [INFO][6015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.228 [INFO][6015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.236 [WARNING][6015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" HandleID="k8s-pod-network.c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0" Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.236 [INFO][6015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" HandleID="k8s-pod-network.c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0" Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.238 [INFO][6015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:47.247209 containerd[1498]: 2025-09-12 17:34:47.242 [INFO][6008] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Sep 12 17:34:47.250391 containerd[1498]: time="2025-09-12T17:34:47.248393088Z" level=info msg="TearDown network for sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\" successfully" Sep 12 17:34:47.250391 containerd[1498]: time="2025-09-12T17:34:47.248420869Z" level=info msg="StopPodSandbox for \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\" returns successfully" Sep 12 17:34:47.250391 containerd[1498]: time="2025-09-12T17:34:47.250014638Z" level=info msg="RemovePodSandbox for \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\"" Sep 12 17:34:47.250391 containerd[1498]: time="2025-09-12T17:34:47.250035608Z" level=info msg="Forcibly stopping sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\"" Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.298 [WARNING][6029] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262fff4b-d78b-430c-976d-43c3eb6a4adc", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"24e970a017807cc9a82e1d5c13b55dd9e331550aa1b7a8be20a59331e9c6040b", Pod:"csi-node-driver-j4tcw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1d08c142ad8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.298 [INFO][6029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.298 [INFO][6029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" iface="eth0" netns="" Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.298 [INFO][6029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.298 [INFO][6029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.337 [INFO][6036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" HandleID="k8s-pod-network.c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0" Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.338 [INFO][6036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.338 [INFO][6036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.354 [WARNING][6036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" HandleID="k8s-pod-network.c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0" Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.354 [INFO][6036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" HandleID="k8s-pod-network.c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Workload="ci--4081--3--6--c--e429241c3f-k8s-csi--node--driver--j4tcw-eth0" Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.359 [INFO][6036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:47.368336 containerd[1498]: 2025-09-12 17:34:47.365 [INFO][6029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80" Sep 12 17:34:47.368336 containerd[1498]: time="2025-09-12T17:34:47.368273022Z" level=info msg="TearDown network for sandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\" successfully" Sep 12 17:34:47.378235 containerd[1498]: time="2025-09-12T17:34:47.378198489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:47.378408 containerd[1498]: time="2025-09-12T17:34:47.378392787Z" level=info msg="RemovePodSandbox \"c4694dc92ba138fd5eac5b62b40e2d09efa5e1c916cae00df4fdeca373398b80\" returns successfully" Sep 12 17:34:47.398221 containerd[1498]: time="2025-09-12T17:34:47.397512563Z" level=info msg="StopPodSandbox for \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\"" Sep 12 17:34:47.413580 systemd[1]: Removed slice kubepods-besteffort-podda5d1853_6f02_4cda_8e7a_51c5e86f7848.slice - libcontainer container kubepods-besteffort-podda5d1853_6f02_4cda_8e7a_51c5e86f7848.slice. Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.458 [WARNING][6050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b5f203f5-5176-4742-9fe4-7f35ee43dc3d", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee", Pod:"coredns-7c65d6cfc9-hhxgk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84d58f1c53a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.458 [INFO][6050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.458 [INFO][6050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" iface="eth0" netns="" Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.458 [INFO][6050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.459 [INFO][6050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.488 [INFO][6058] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" HandleID="k8s-pod-network.f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.490 [INFO][6058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.490 [INFO][6058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.497 [WARNING][6058] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" HandleID="k8s-pod-network.f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.498 [INFO][6058] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" HandleID="k8s-pod-network.f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.500 [INFO][6058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:47.507914 containerd[1498]: 2025-09-12 17:34:47.504 [INFO][6050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Sep 12 17:34:47.508734 containerd[1498]: time="2025-09-12T17:34:47.508256986Z" level=info msg="TearDown network for sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\" successfully" Sep 12 17:34:47.508734 containerd[1498]: time="2025-09-12T17:34:47.508280130Z" level=info msg="StopPodSandbox for \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\" returns successfully" Sep 12 17:34:47.508983 containerd[1498]: time="2025-09-12T17:34:47.508962978Z" level=info msg="RemovePodSandbox for \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\"" Sep 12 17:34:47.509023 containerd[1498]: time="2025-09-12T17:34:47.508988837Z" level=info msg="Forcibly stopping sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\"" Sep 12 17:34:47.586487 kubelet[2549]: I0912 17:34:47.563290 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j4tcw" podStartSLOduration=26.67349008 podStartE2EDuration="45.561456302s" podCreationTimestamp="2025-09-12 17:34:02 +0000 UTC" firstStartedPulling="2025-09-12 17:34:26.628726127 +0000 UTC m=+43.459565009" lastFinishedPulling="2025-09-12 17:34:45.51669235 +0000 UTC m=+62.347531231" observedRunningTime="2025-09-12 17:34:47.543928252 +0000 UTC m=+64.374767133" watchObservedRunningTime="2025-09-12 17:34:47.561456302 +0000 UTC m=+64.392295183" Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.562 [WARNING][6072] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b5f203f5-5176-4742-9fe4-7f35ee43dc3d", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"e964fa8e173cf587ef983d10ca48b34c9bdd9e3b151b0540366c54609402d3ee", Pod:"coredns-7c65d6cfc9-hhxgk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84d58f1c53a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.563 [INFO][6072] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.563 [INFO][6072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" iface="eth0" netns="" Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.563 [INFO][6072] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.563 [INFO][6072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.599 [INFO][6079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" HandleID="k8s-pod-network.f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.599 [INFO][6079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.600 [INFO][6079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.613 [WARNING][6079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" HandleID="k8s-pod-network.f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.613 [INFO][6079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" HandleID="k8s-pod-network.f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Workload="ci--4081--3--6--c--e429241c3f-k8s-coredns--7c65d6cfc9--hhxgk-eth0" Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.619 [INFO][6079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:47.626351 containerd[1498]: 2025-09-12 17:34:47.623 [INFO][6072] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6" Sep 12 17:34:47.627622 containerd[1498]: time="2025-09-12T17:34:47.626402562Z" level=info msg="TearDown network for sandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\" successfully" Sep 12 17:34:47.632039 containerd[1498]: time="2025-09-12T17:34:47.631028014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:47.632039 containerd[1498]: time="2025-09-12T17:34:47.631323438Z" level=info msg="RemovePodSandbox \"f3334af1e762ea18f335591eb30df7237472d44039af50bfc3f8863c0f3c9eb6\" returns successfully" Sep 12 17:34:47.634611 containerd[1498]: time="2025-09-12T17:34:47.633278605Z" level=info msg="StopPodSandbox for \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\"" Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.677 [WARNING][6093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0", GenerateName:"calico-apiserver-5dcc9d7764-", Namespace:"calico-apiserver", SelfLink:"", UID:"37ecfd1d-6ac6-4ad3-9078-efcb50051c47", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcc9d7764", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc", Pod:"calico-apiserver-5dcc9d7764-p9sbl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b05186054c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.678 [INFO][6093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.678 [INFO][6093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" iface="eth0" netns="" Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.678 [INFO][6093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.678 [INFO][6093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.707 [INFO][6101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" HandleID="k8s-pod-network.33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.708 [INFO][6101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.708 [INFO][6101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.714 [WARNING][6101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" HandleID="k8s-pod-network.33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.714 [INFO][6101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" HandleID="k8s-pod-network.33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.716 [INFO][6101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:47.720633 containerd[1498]: 2025-09-12 17:34:47.717 [INFO][6093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:47.722744 containerd[1498]: time="2025-09-12T17:34:47.720665576Z" level=info msg="TearDown network for sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\" successfully" Sep 12 17:34:47.722744 containerd[1498]: time="2025-09-12T17:34:47.720696734Z" level=info msg="StopPodSandbox for \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\" returns successfully" Sep 12 17:34:47.722744 containerd[1498]: time="2025-09-12T17:34:47.721232422Z" level=info msg="RemovePodSandbox for \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\"" Sep 12 17:34:47.722744 containerd[1498]: time="2025-09-12T17:34:47.721254603Z" level=info msg="Forcibly stopping sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\"" Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.795 [WARNING][6116] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0", GenerateName:"calico-apiserver-5dcc9d7764-", Namespace:"calico-apiserver", SelfLink:"", UID:"37ecfd1d-6ac6-4ad3-9078-efcb50051c47", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcc9d7764", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-c-e429241c3f", ContainerID:"72103e514efa31b70f06a88177a0aecaeb8945a8ad0595d578a178fc39c923cc", Pod:"calico-apiserver-5dcc9d7764-p9sbl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b05186054c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.795 [INFO][6116] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.795 [INFO][6116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" iface="eth0" netns="" Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.795 [INFO][6116] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.795 [INFO][6116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.818 [INFO][6124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" HandleID="k8s-pod-network.33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.820 [INFO][6124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.820 [INFO][6124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.826 [WARNING][6124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" HandleID="k8s-pod-network.33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.826 [INFO][6124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" HandleID="k8s-pod-network.33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--5dcc9d7764--p9sbl-eth0" Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.828 [INFO][6124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:47.835749 containerd[1498]: 2025-09-12 17:34:47.831 [INFO][6116] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de" Sep 12 17:34:47.836088 containerd[1498]: time="2025-09-12T17:34:47.835745197Z" level=info msg="TearDown network for sandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\" successfully" Sep 12 17:34:47.840604 containerd[1498]: time="2025-09-12T17:34:47.840567612Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:34:47.840661 containerd[1498]: time="2025-09-12T17:34:47.840635868Z" level=info msg="RemovePodSandbox \"33d20447d5d93e3fb0303e0352f3bcbaf43181688544494a9d1e49647c3719de\" returns successfully" Sep 12 17:34:49.274854 kubelet[2549]: I0912 17:34:49.274814 2549 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5d1853-6f02-4cda-8e7a-51c5e86f7848" path="/var/lib/kubelet/pods/da5d1853-6f02-4cda-8e7a-51c5e86f7848/volumes" Sep 12 17:34:49.540999 kubelet[2549]: I0912 17:34:49.540877 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dcc9d7764-2rq9m" podStartSLOduration=6.540862646 podStartE2EDuration="6.540862646s" podCreationTimestamp="2025-09-12 17:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:34:47.622456885 +0000 UTC m=+64.453295765" watchObservedRunningTime="2025-09-12 17:34:49.540862646 +0000 UTC m=+66.371701527" Sep 12 17:34:49.631624 containerd[1498]: time="2025-09-12T17:34:49.631577791Z" level=info msg="StopContainer for \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\" with timeout 30 (s)" Sep 12 17:34:49.652689 containerd[1498]: time="2025-09-12T17:34:49.652649259Z" level=info msg="Stop container \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\" with signal terminated" Sep 12 17:34:49.739259 systemd[1]: cri-containerd-8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630.scope: Deactivated successfully. Sep 12 17:34:49.821259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630-rootfs.mount: Deactivated successfully. 
Sep 12 17:34:49.840220 containerd[1498]: time="2025-09-12T17:34:49.819820875Z" level=info msg="shim disconnected" id=8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630 namespace=k8s.io Sep 12 17:34:49.840335 containerd[1498]: time="2025-09-12T17:34:49.840225522Z" level=warning msg="cleaning up after shim disconnected" id=8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630 namespace=k8s.io Sep 12 17:34:49.840335 containerd[1498]: time="2025-09-12T17:34:49.840257031Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:34:49.895792 containerd[1498]: time="2025-09-12T17:34:49.895738472Z" level=info msg="StopContainer for \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\" returns successfully" Sep 12 17:34:49.907720 containerd[1498]: time="2025-09-12T17:34:49.907641853Z" level=info msg="StopPodSandbox for \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\"" Sep 12 17:34:49.910731 containerd[1498]: time="2025-09-12T17:34:49.910650001Z" level=info msg="Container to stop \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:34:49.916529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083-shm.mount: Deactivated successfully. Sep 12 17:34:49.926487 systemd[1]: cri-containerd-489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083.scope: Deactivated successfully. Sep 12 17:34:49.966784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083-rootfs.mount: Deactivated successfully. Sep 12 17:34:49.971759 containerd[1498]: time="2025-09-12T17:34:49.970114861Z" level=info msg="shim disconnected" id=489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083 namespace=k8s.io Sep 12 17:34:49.971759 containerd[1498]: time="2025-09-12T17:34:49.970183939Z" level=warning msg="cleaning up after shim disconnected" id=489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083 namespace=k8s.io Sep 12 17:34:49.971759 containerd[1498]: time="2025-09-12T17:34:49.970191773Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:34:49.988580 containerd[1498]: time="2025-09-12T17:34:49.988546302Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:34:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:34:50.047075 systemd-networkd[1386]: calia925c21e3e2: Link DOWN Sep 12 17:34:50.047081 systemd-networkd[1386]: calia925c21e3e2: Lost carrier Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.043 [INFO][6208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.044 [INFO][6208] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" iface="eth0" netns="/var/run/netns/cni-b4bd08ec-c519-8480-4fe5-400447d7d985" Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.045 [INFO][6208] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" iface="eth0" netns="/var/run/netns/cni-b4bd08ec-c519-8480-4fe5-400447d7d985" Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.053 [INFO][6208] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" after=9.206542ms iface="eth0" netns="/var/run/netns/cni-b4bd08ec-c519-8480-4fe5-400447d7d985" Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.053 [INFO][6208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.053 [INFO][6208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.097 [INFO][6217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.097 [INFO][6217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.098 [INFO][6217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.135 [INFO][6217] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.135 [INFO][6217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.136 [INFO][6217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:34:50.142285 containerd[1498]: 2025-09-12 17:34:50.138 [INFO][6208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:34:50.142285 containerd[1498]: time="2025-09-12T17:34:50.142259028Z" level=info msg="TearDown network for sandbox \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\" successfully" Sep 12 17:34:50.142285 containerd[1498]: time="2025-09-12T17:34:50.142283222Z" level=info msg="StopPodSandbox for \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\" returns successfully" Sep 12 17:34:50.144912 systemd[1]: run-netns-cni\x2db4bd08ec\x2dc519\x2d8480\x2d4fe5\x2d400447d7d985.mount: Deactivated successfully. 
Sep 12 17:34:50.297258 kubelet[2549]: I0912 17:34:50.297211 2549 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhwlx\" (UniqueName: \"kubernetes.io/projected/bbff376f-9ddd-4344-b41b-dd9ac22821d6-kube-api-access-rhwlx\") pod \"bbff376f-9ddd-4344-b41b-dd9ac22821d6\" (UID: \"bbff376f-9ddd-4344-b41b-dd9ac22821d6\") " Sep 12 17:34:50.297683 kubelet[2549]: I0912 17:34:50.297282 2549 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bbff376f-9ddd-4344-b41b-dd9ac22821d6-calico-apiserver-certs\") pod \"bbff376f-9ddd-4344-b41b-dd9ac22821d6\" (UID: \"bbff376f-9ddd-4344-b41b-dd9ac22821d6\") " Sep 12 17:34:50.335668 systemd[1]: var-lib-kubelet-pods-bbff376f\x2d9ddd\x2d4344\x2db41b\x2ddd9ac22821d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drhwlx.mount: Deactivated successfully. Sep 12 17:34:50.337034 kubelet[2549]: I0912 17:34:50.336993 2549 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbff376f-9ddd-4344-b41b-dd9ac22821d6-kube-api-access-rhwlx" (OuterVolumeSpecName: "kube-api-access-rhwlx") pod "bbff376f-9ddd-4344-b41b-dd9ac22821d6" (UID: "bbff376f-9ddd-4344-b41b-dd9ac22821d6"). InnerVolumeSpecName "kube-api-access-rhwlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:34:50.337494 kubelet[2549]: I0912 17:34:50.337443 2549 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbff376f-9ddd-4344-b41b-dd9ac22821d6-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "bbff376f-9ddd-4344-b41b-dd9ac22821d6" (UID: "bbff376f-9ddd-4344-b41b-dd9ac22821d6"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:34:50.398304 kubelet[2549]: I0912 17:34:50.398181 2549 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhwlx\" (UniqueName: \"kubernetes.io/projected/bbff376f-9ddd-4344-b41b-dd9ac22821d6-kube-api-access-rhwlx\") on node \"ci-4081-3-6-c-e429241c3f\" DevicePath \"\"" Sep 12 17:34:50.398304 kubelet[2549]: I0912 17:34:50.398215 2549 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bbff376f-9ddd-4344-b41b-dd9ac22821d6-calico-apiserver-certs\") on node \"ci-4081-3-6-c-e429241c3f\" DevicePath \"\"" Sep 12 17:34:50.572906 kubelet[2549]: I0912 17:34:50.572726 2549 scope.go:117] "RemoveContainer" containerID="8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630" Sep 12 17:34:50.574599 systemd[1]: Removed slice kubepods-besteffort-podbbff376f_9ddd_4344_b41b_dd9ac22821d6.slice - libcontainer container kubepods-besteffort-podbbff376f_9ddd_4344_b41b_dd9ac22821d6.slice. 
Sep 12 17:34:50.671218 containerd[1498]: time="2025-09-12T17:34:50.670569351Z" level=info msg="RemoveContainer for \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\"" Sep 12 17:34:50.674538 containerd[1498]: time="2025-09-12T17:34:50.674519854Z" level=info msg="RemoveContainer for \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\" returns successfully" Sep 12 17:34:50.675012 kubelet[2549]: I0912 17:34:50.674903 2549 scope.go:117] "RemoveContainer" containerID="8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630" Sep 12 17:34:50.682168 containerd[1498]: time="2025-09-12T17:34:50.677704342Z" level=error msg="ContainerStatus for \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\": not found" Sep 12 17:34:50.697111 kubelet[2549]: E0912 17:34:50.697022 2549 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\": not found" containerID="8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630" Sep 12 17:34:50.698690 kubelet[2549]: I0912 17:34:50.697061 2549 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630"} err="failed to get container status \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bbf1d64f4df7934bcfc72e10898b0a9aa59527ecfd2260fa8681cd59da61630\": not found" Sep 12 17:34:50.820676 systemd[1]: var-lib-kubelet-pods-bbff376f\x2d9ddd\x2d4344\x2db41b\x2ddd9ac22821d6-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 12 17:34:51.316762 kubelet[2549]: I0912 17:34:51.316699 2549 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbff376f-9ddd-4344-b41b-dd9ac22821d6" path="/var/lib/kubelet/pods/bbff376f-9ddd-4344-b41b-dd9ac22821d6/volumes" Sep 12 17:35:20.610470 systemd[1]: Started sshd@7-95.216.139.29:22-147.75.109.163:41730.service - OpenSSH per-connection server daemon (147.75.109.163:41730). Sep 12 17:35:21.769013 sshd[6336]: Accepted publickey for core from 147.75.109.163 port 41730 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:21.772321 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:21.784700 systemd-logind[1473]: New session 8 of user core. Sep 12 17:35:21.789262 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:35:22.963396 sshd[6336]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:22.968445 systemd-logind[1473]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:35:22.968944 systemd[1]: sshd@7-95.216.139.29:22-147.75.109.163:41730.service: Deactivated successfully. Sep 12 17:35:22.971987 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:35:22.973742 systemd-logind[1473]: Removed session 8. Sep 12 17:35:28.114410 systemd[1]: Started sshd@8-95.216.139.29:22-147.75.109.163:41746.service - OpenSSH per-connection server daemon (147.75.109.163:41746). 
Sep 12 17:35:29.118086 systemd[1]: run-containerd-runc-k8s.io-7d8ddecf546cbdeb3c317085e599bb6cbe64e4b649d8c9849db667df19f023ce-runc.WNbKf7.mount: Deactivated successfully. Sep 12 17:35:29.144253 sshd[6372]: Accepted publickey for core from 147.75.109.163 port 41746 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:29.148561 sshd[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:29.153822 systemd-logind[1473]: New session 9 of user core. Sep 12 17:35:29.160370 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:35:30.066493 sshd[6372]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:30.073192 systemd[1]: sshd@8-95.216.139.29:22-147.75.109.163:41746.service: Deactivated successfully. Sep 12 17:35:30.076233 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:35:30.078291 systemd-logind[1473]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:35:30.080477 systemd-logind[1473]: Removed session 9. Sep 12 17:35:30.236471 systemd[1]: Started sshd@9-95.216.139.29:22-147.75.109.163:45646.service - OpenSSH per-connection server daemon (147.75.109.163:45646). Sep 12 17:35:31.231227 sshd[6406]: Accepted publickey for core from 147.75.109.163 port 45646 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:31.232880 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:31.237929 systemd-logind[1473]: New session 10 of user core. Sep 12 17:35:31.246313 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:35:32.029946 sshd[6406]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:32.034894 systemd[1]: sshd@9-95.216.139.29:22-147.75.109.163:45646.service: Deactivated successfully. Sep 12 17:35:32.038720 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:35:32.040556 systemd-logind[1473]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:35:32.042023 systemd-logind[1473]: Removed session 10. Sep 12 17:35:32.229491 systemd[1]: Started sshd@10-95.216.139.29:22-147.75.109.163:45662.service - OpenSSH per-connection server daemon (147.75.109.163:45662). Sep 12 17:35:33.321224 sshd[6438]: Accepted publickey for core from 147.75.109.163 port 45662 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:33.322635 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:33.326796 systemd-logind[1473]: New session 11 of user core. Sep 12 17:35:33.330282 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:35:34.331239 sshd[6438]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:34.336794 systemd-logind[1473]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:35:34.337622 systemd[1]: sshd@10-95.216.139.29:22-147.75.109.163:45662.service: Deactivated successfully. Sep 12 17:35:34.341010 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:35:34.342682 systemd-logind[1473]: Removed session 11. Sep 12 17:35:39.487422 systemd[1]: Started sshd@11-95.216.139.29:22-147.75.109.163:45668.service - OpenSSH per-connection server daemon (147.75.109.163:45668). 
Sep 12 17:35:40.451100 sshd[6455]: Accepted publickey for core from 147.75.109.163 port 45668 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:40.452829 sshd[6455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:40.457311 systemd-logind[1473]: New session 12 of user core. Sep 12 17:35:40.463307 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:35:41.253891 sshd[6455]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:41.257539 systemd[1]: sshd@11-95.216.139.29:22-147.75.109.163:45668.service: Deactivated successfully. Sep 12 17:35:41.259125 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:35:41.260980 systemd-logind[1473]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:35:41.262927 systemd-logind[1473]: Removed session 12. Sep 12 17:35:46.428384 systemd[1]: Started sshd@12-95.216.139.29:22-147.75.109.163:44350.service - OpenSSH per-connection server daemon (147.75.109.163:44350). Sep 12 17:35:47.433225 sshd[6497]: Accepted publickey for core from 147.75.109.163 port 44350 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:47.435051 sshd[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:47.439691 systemd-logind[1473]: New session 13 of user core. Sep 12 17:35:47.444326 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:35:47.924408 kubelet[2549]: I0912 17:35:47.920287 2549 scope.go:117] "RemoveContainer" containerID="65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4" Sep 12 17:35:47.952231 containerd[1498]: time="2025-09-12T17:35:47.931582345Z" level=info msg="RemoveContainer for \"65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4\"" Sep 12 17:35:47.996133 containerd[1498]: time="2025-09-12T17:35:47.996079630Z" level=info msg="RemoveContainer for \"65851467605f149c8a4c9243549af09405f82d60e13da70485569c9c742f51d4\" returns successfully" Sep 12 17:35:48.011708 containerd[1498]: time="2025-09-12T17:35:48.011618692Z" level=info msg="StopPodSandbox for \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\"" Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.251 [WARNING][6511] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.252 [INFO][6511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.252 [INFO][6511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" iface="eth0" netns="" Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.252 [INFO][6511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.252 [INFO][6511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.417 [INFO][6522] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.420 [INFO][6522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.420 [INFO][6522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.431 [WARNING][6522] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.431 [INFO][6522] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.432 [INFO][6522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:48.436437 containerd[1498]: 2025-09-12 17:35:48.434 [INFO][6511] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:35:48.440657 containerd[1498]: time="2025-09-12T17:35:48.440600364Z" level=info msg="TearDown network for sandbox \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\" successfully" Sep 12 17:35:48.440657 containerd[1498]: time="2025-09-12T17:35:48.440652192Z" level=info msg="StopPodSandbox for \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\" returns successfully" Sep 12 17:35:48.458099 containerd[1498]: time="2025-09-12T17:35:48.458055913Z" level=info msg="RemovePodSandbox for \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\"" Sep 12 17:35:48.477494 containerd[1498]: time="2025-09-12T17:35:48.477442734Z" level=info msg="Forcibly stopping sandbox \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\"" Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.517 [WARNING][6536] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.517 [INFO][6536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.517 [INFO][6536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" iface="eth0" netns="" Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.517 [INFO][6536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.517 [INFO][6536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.539 [INFO][6543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.539 [INFO][6543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.539 [INFO][6543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.545 [WARNING][6543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.545 [INFO][6543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" HandleID="k8s-pod-network.4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--gr9jn-eth0" Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.546 [INFO][6543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:48.552659 containerd[1498]: 2025-09-12 17:35:48.549 [INFO][6536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632" Sep 12 17:35:48.552659 containerd[1498]: time="2025-09-12T17:35:48.552755697Z" level=info msg="TearDown network for sandbox \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\" successfully" Sep 12 17:35:48.567815 sshd[6497]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:48.581802 systemd[1]: sshd@12-95.216.139.29:22-147.75.109.163:44350.service: Deactivated successfully. Sep 12 17:35:48.584838 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:35:48.586330 systemd-logind[1473]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:35:48.588102 containerd[1498]: time="2025-09-12T17:35:48.587968083Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:35:48.588512 containerd[1498]: time="2025-09-12T17:35:48.588129777Z" level=info msg="RemovePodSandbox \"4f8a01d2d90804fd54fd693f7c976b017a08400525879f9c7dc7f50dcca54632\" returns successfully" Sep 12 17:35:48.588854 systemd-logind[1473]: Removed session 13. Sep 12 17:35:48.590080 containerd[1498]: time="2025-09-12T17:35:48.589997901Z" level=info msg="StopPodSandbox for \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\"" Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.623 [WARNING][6559] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.624 [INFO][6559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.624 [INFO][6559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" iface="eth0" netns="" Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.624 [INFO][6559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.624 [INFO][6559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.643 [INFO][6567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.644 [INFO][6567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.644 [INFO][6567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.649 [WARNING][6567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.649 [INFO][6567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.650 [INFO][6567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:48.655941 containerd[1498]: 2025-09-12 17:35:48.653 [INFO][6559] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:35:48.656438 containerd[1498]: time="2025-09-12T17:35:48.655986454Z" level=info msg="TearDown network for sandbox \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\" successfully" Sep 12 17:35:48.656438 containerd[1498]: time="2025-09-12T17:35:48.656017943Z" level=info msg="StopPodSandbox for \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\" returns successfully" Sep 12 17:35:48.656982 containerd[1498]: time="2025-09-12T17:35:48.656931887Z" level=info msg="RemovePodSandbox for \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\"" Sep 12 17:35:48.656982 containerd[1498]: time="2025-09-12T17:35:48.656969689Z" level=info msg="Forcibly stopping sandbox \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\"" Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.687 [WARNING][6581] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" WorkloadEndpoint="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.687 [INFO][6581] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.687 [INFO][6581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" iface="eth0" netns="" Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.688 [INFO][6581] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.688 [INFO][6581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.708 [INFO][6589] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.708 [INFO][6589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.709 [INFO][6589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.714 [WARNING][6589] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.714 [INFO][6589] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" HandleID="k8s-pod-network.489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Workload="ci--4081--3--6--c--e429241c3f-k8s-calico--apiserver--65c8b64669--dswbl-eth0" Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.715 [INFO][6589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:48.722681 containerd[1498]: 2025-09-12 17:35:48.718 [INFO][6581] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083" Sep 12 17:35:48.722681 containerd[1498]: time="2025-09-12T17:35:48.722217424Z" level=info msg="TearDown network for sandbox \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\" successfully" Sep 12 17:35:48.815555 containerd[1498]: time="2025-09-12T17:35:48.815502038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:35:48.815721 containerd[1498]: time="2025-09-12T17:35:48.815573573Z" level=info msg="RemovePodSandbox \"489ce372d9c6ad80c6a97023bcf9133dc6c036d57c8f02c1ce2440e8ca6f4083\" returns successfully" Sep 12 17:35:53.764633 systemd[1]: Started sshd@13-95.216.139.29:22-147.75.109.163:60110.service - OpenSSH per-connection server daemon (147.75.109.163:60110). Sep 12 17:35:54.898955 sshd[6612]: Accepted publickey for core from 147.75.109.163 port 60110 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:54.901433 sshd[6612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:54.909517 systemd-logind[1473]: New session 14 of user core. Sep 12 17:35:54.916341 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:35:56.340941 sshd[6612]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:56.346339 systemd-logind[1473]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:35:56.348176 systemd[1]: sshd@13-95.216.139.29:22-147.75.109.163:60110.service: Deactivated successfully. Sep 12 17:35:56.351068 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:35:56.353467 systemd-logind[1473]: Removed session 14. Sep 12 17:35:56.496759 systemd[1]: Started sshd@14-95.216.139.29:22-147.75.109.163:60124.service - OpenSSH per-connection server daemon (147.75.109.163:60124). Sep 12 17:35:57.478174 sshd[6647]: Accepted publickey for core from 147.75.109.163 port 60124 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:57.480383 sshd[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:57.487288 systemd-logind[1473]: New session 15 of user core. Sep 12 17:35:57.493440 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:35:58.424710 sshd[6647]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:58.429236 systemd-logind[1473]: Session 15 logged out. 
Waiting for processes to exit. Sep 12 17:35:58.429670 systemd[1]: sshd@14-95.216.139.29:22-147.75.109.163:60124.service: Deactivated successfully. Sep 12 17:35:58.431843 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:35:58.433140 systemd-logind[1473]: Removed session 15. Sep 12 17:35:58.594427 systemd[1]: Started sshd@15-95.216.139.29:22-147.75.109.163:60136.service - OpenSSH per-connection server daemon (147.75.109.163:60136). Sep 12 17:35:59.579257 sshd[6666]: Accepted publickey for core from 147.75.109.163 port 60136 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:35:59.586846 sshd[6666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:59.593721 systemd-logind[1473]: New session 16 of user core. Sep 12 17:35:59.595365 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:36:02.074885 sshd[6666]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:02.085850 systemd[1]: sshd@15-95.216.139.29:22-147.75.109.163:60136.service: Deactivated successfully. Sep 12 17:36:02.088387 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:36:02.089865 systemd-logind[1473]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:36:02.090692 systemd-logind[1473]: Removed session 16. Sep 12 17:36:02.291458 systemd[1]: Started sshd@16-95.216.139.29:22-147.75.109.163:53440.service - OpenSSH per-connection server daemon (147.75.109.163:53440). Sep 12 17:36:03.443490 sshd[6728]: Accepted publickey for core from 147.75.109.163 port 53440 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:36:03.445358 sshd[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:03.452000 systemd-logind[1473]: New session 17 of user core. Sep 12 17:36:03.456289 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:36:03.579089 systemd[1]: run-containerd-runc-k8s.io-7d8ddecf546cbdeb3c317085e599bb6cbe64e4b649d8c9849db667df19f023ce-runc.PMCNUt.mount: Deactivated successfully. Sep 12 17:36:04.847304 sshd[6728]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:04.850120 systemd-logind[1473]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:36:04.850790 systemd[1]: sshd@16-95.216.139.29:22-147.75.109.163:53440.service: Deactivated successfully. Sep 12 17:36:04.853520 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:36:04.855515 systemd-logind[1473]: Removed session 17. Sep 12 17:36:05.034416 systemd[1]: Started sshd@17-95.216.139.29:22-147.75.109.163:53452.service - OpenSSH per-connection server daemon (147.75.109.163:53452). Sep 12 17:36:06.140189 sshd[6757]: Accepted publickey for core from 147.75.109.163 port 53452 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA Sep 12 17:36:06.141607 sshd[6757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:06.147923 systemd-logind[1473]: New session 18 of user core. Sep 12 17:36:06.153371 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:36:07.046726 sshd[6757]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:07.054645 systemd[1]: sshd@17-95.216.139.29:22-147.75.109.163:53452.service: Deactivated successfully. Sep 12 17:36:07.056352 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:36:07.057505 systemd-logind[1473]: Session 18 logged out. Waiting for processes to exit. 
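Note on the Calico teardown sequence above: the CNI plugin makes IP release idempotent by serializing it behind a host-wide IPAM lock and treating a missing allocation as a warning ("Asked to release address but it doesn't exist. Ignoring"), which is why the forced second StopPodSandbox pass completes cleanly. A minimal Go sketch of that acquire/warn/release shape; ipamStore and releaseByHandle are hypothetical stand-ins, not Calico's real API:

    package main

    import (
            "fmt"
            "sync"
    )

    // ipamStore is a stand-in for the real allocation datastore.
    type ipamStore struct {
            mu       sync.Mutex        // stands in for the host-wide IPAM lock
            byHandle map[string]string // handleID -> allocated IP
    }

    // releaseByHandle mirrors the logged flow: acquire the host-wide lock,
    // warn-and-ignore if the handle is already gone, then release the lock.
    func (s *ipamStore) releaseByHandle(handleID string) {
            s.mu.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired ..."
            defer s.mu.Unlock() // "Released host-wide IPAM lock."

            ip, ok := s.byHandle[handleID]
            if !ok {
                    // "Asked to release address but it doesn't exist. Ignoring ..."
                    fmt.Printf("WARNING: no allocation for handle %s; ignoring\n", handleID)
                    return
            }
            delete(s.byHandle, handleID)
            fmt.Printf("released %s for handle %s\n", ip, handleID)
    }

    func main() {
            s := &ipamStore{byHandle: map[string]string{}}
            h := "k8s-pod-network.489ce372d9c6..." // handle truncated for the example
            s.releaseByHandle(h)                   // first teardown: warn and continue
            s.releaseByHandle(h)                   // forced second pass: same benign outcome
    }

Because a missing allocation is a no-op rather than an error, the repeated teardown converges to the same clean state instead of failing the pod deletion.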
Sep 12 17:36:07.058523 systemd-logind[1473]: Removed session 18.
Sep 12 17:36:12.228094 systemd[1]: Started sshd@18-95.216.139.29:22-147.75.109.163:58280.service - OpenSSH per-connection server daemon (147.75.109.163:58280).
Sep 12 17:36:13.368534 sshd[6775]: Accepted publickey for core from 147.75.109.163 port 58280 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA
Sep 12 17:36:13.370954 sshd[6775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:13.375450 systemd-logind[1473]: New session 19 of user core.
Sep 12 17:36:13.380290 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:36:14.206734 sshd[6775]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:14.210109 systemd[1]: sshd@18-95.216.139.29:22-147.75.109.163:58280.service: Deactivated successfully.
Sep 12 17:36:14.213000 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:36:14.214180 systemd-logind[1473]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:36:14.215363 systemd-logind[1473]: Removed session 19.
Sep 12 17:36:19.358467 systemd[1]: Started sshd@19-95.216.139.29:22-147.75.109.163:58286.service - OpenSSH per-connection server daemon (147.75.109.163:58286).
Sep 12 17:36:20.347527 sshd[6787]: Accepted publickey for core from 147.75.109.163 port 58286 ssh2: RSA SHA256:0lLQDI1exHD7h3/xNDVcYYaT5o4Ke3hnBgY7BErDCBA
Sep 12 17:36:20.349539 sshd[6787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:20.355058 systemd-logind[1473]: New session 20 of user core.
Sep 12 17:36:20.359395 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:36:21.133231 sshd[6787]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:21.139802 systemd[1]: sshd@19-95.216.139.29:22-147.75.109.163:58286.service: Deactivated successfully.
Sep 12 17:36:21.141495 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:36:21.142823 systemd-logind[1473]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:36:21.143997 systemd-logind[1473]: Removed session 20.
Sep 12 17:36:29.133317 systemd[1]: run-containerd-runc-k8s.io-7d8ddecf546cbdeb3c317085e599bb6cbe64e4b649d8c9849db667df19f023ce-runc.uDorNc.mount: Deactivated successfully.
Sep 12 17:36:36.867985 systemd[1]: cri-containerd-807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c.scope: Deactivated successfully.
Sep 12 17:36:36.869265 systemd[1]: cri-containerd-807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c.scope: Consumed 3.376s CPU time, 20.7M memory peak, 0B memory swap peak.
Sep 12 17:36:36.919243 systemd[1]: cri-containerd-346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad.scope: Deactivated successfully.
Sep 12 17:36:36.919551 systemd[1]: cri-containerd-346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad.scope: Consumed 14.567s CPU time.
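Each SSH login above follows the same lifecycle: a per-connection sshd@ unit starts, the public key is accepted, pam_unix opens the session, systemd-logind registers it and runs it in a transient session-N.scope, and teardown happens in reverse, with systemd reporting per-scope accounting on exit ("Consumed 3.376s CPU time, 20.7M memory peak"). A small stdlib-only Go sketch that pairs the "New session N" / "Removed session N" lines from a journal stream to report session lifetimes; the regexes and timestamp layout are assumptions matched to the entries above:

    package main

    import (
            "bufio"
            "fmt"
            "os"
            "regexp"
            "time"
    )

    var (
            // Matches e.g. "Sep 12 17:35:54.909517 systemd-logind[1473]: New session 14 of user core."
            newRe = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: New session (\d+) of user`)
            // Matches e.g. "Sep 12 17:35:56.353467 systemd-logind[1473]: Removed session 14."
            delRe = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: Removed session (\d+)\.`)
    )

    // parseTS parses the short journal timestamp; the year is absent,
    // which is fine for computing durations within one boot.
    func parseTS(s string) (time.Time, error) {
            return time.Parse("Jan 2 15:04:05.000000", s)
    }

    func main() {
            opened := map[string]time.Time{}
            sc := bufio.NewScanner(os.Stdin)
            for sc.Scan() {
                    line := sc.Text()
                    if m := newRe.FindStringSubmatch(line); m != nil {
                            if t, err := parseTS(m[1]); err == nil {
                                    opened[m[2]] = t
                            }
                    } else if m := delRe.FindStringSubmatch(line); m != nil {
                            if t0, ok := opened[m[2]]; ok {
                                    if t1, err := parseTS(m[1]); err == nil {
                                            fmt.Printf("session %s lasted %s\n", m[2], t1.Sub(t0))
                                            delete(opened, m[2])
                                    }
                            }
                    }
            }
    }

Fed the journal lines above on stdin, this would report, for example, session 14 lasting roughly 1.44s (17:35:54.909517 to 17:35:56.353467).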
Sep 12 17:36:37.071759 kubelet[2549]: E0912 17:36:37.055252 2549 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4081-3-6-c-e429241c3f)"
Sep 12 17:36:37.136139 kubelet[2549]: E0912 17:36:37.135798 2549 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35606->10.0.0.2:2379: read: connection timed out"
Sep 12 17:36:37.182016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c-rootfs.mount: Deactivated successfully.
Sep 12 17:36:37.207001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad-rootfs.mount: Deactivated successfully.
Sep 12 17:36:37.214328 containerd[1498]: time="2025-09-12T17:36:37.198530577Z" level=info msg="shim disconnected" id=807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c namespace=k8s.io
Sep 12 17:36:37.217457 containerd[1498]: time="2025-09-12T17:36:37.214332414Z" level=warning msg="cleaning up after shim disconnected" id=807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c namespace=k8s.io
Sep 12 17:36:37.217457 containerd[1498]: time="2025-09-12T17:36:37.214348154Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:36:37.217457 containerd[1498]: time="2025-09-12T17:36:37.203909303Z" level=info msg="shim disconnected" id=346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad namespace=k8s.io
Sep 12 17:36:37.217457 containerd[1498]: time="2025-09-12T17:36:37.214418757Z" level=warning msg="cleaning up after shim disconnected" id=346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad namespace=k8s.io
Sep 12 17:36:37.217457 containerd[1498]: time="2025-09-12T17:36:37.214436440Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:36:37.291572 containerd[1498]: time="2025-09-12T17:36:37.291531525Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:36:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 17:36:38.219976 kubelet[2549]: I0912 17:36:38.213356 2549 scope.go:117] "RemoveContainer" containerID="346496a34ccee722e5856893d33a239d7bf9f5b5fb75a01314b37e3bc1d3ecad"
Sep 12 17:36:38.233294 kubelet[2549]: I0912 17:36:38.233103 2549 scope.go:117] "RemoveContainer" containerID="807f18d8cd8f8e9ebe537c777df890ffc57d83742377883fa5d91410ada5451c"
Sep 12 17:36:38.289879 containerd[1498]: time="2025-09-12T17:36:38.289822448Z" level=info msg="CreateContainer within sandbox \"591fe1d47f6ccead07d413a64d400069862e56f6a3892be978ef150afe1367e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 17:36:38.290343 containerd[1498]: time="2025-09-12T17:36:38.289839871Z" level=info msg="CreateContainer within sandbox \"df960eda949a64b9aba8b336adfacc2e5e28bfb6bd3efbccd9d6e7f33a6485ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 12 17:36:38.363204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269648618.mount: Deactivated successfully.
Sep 12 17:36:38.366687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785994093.mount: Deactivated successfully.
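The paired "RemoveContainer" / "CreateContainer ... Attempt:1" entries are the kubelet's standard dead-container recovery after the two scopes exited: remove the exited container, recreate it inside the still-running pod sandbox with the attempt counter incremented, then start the replacement (the StartContainer calls follow below). A schematic Go sketch of that sequence; runtimeClient and its methods are simplified hypothetical stand-ins, not the real CRI API, whose calls carry full config objects:

    package main

    import "fmt"

    // runtimeClient is a drastically simplified stand-in for a CRI runtime client.
    type runtimeClient interface {
            RemoveContainer(id string) error
            CreateContainer(sandboxID, name string, attempt uint32) (string, error)
            StartContainer(id string) error
    }

    // restartInSandbox mirrors the logged sequence: remove the dead container,
    // recreate it in the same pod sandbox with attempt+1, then start it.
    func restartInSandbox(rt runtimeClient, deadID, sandboxID, name string, prevAttempt uint32) (string, error) {
            if err := rt.RemoveContainer(deadID); err != nil {
                    return "", fmt.Errorf("remove %s: %w", deadID, err)
            }
            newID, err := rt.CreateContainer(sandboxID, name, prevAttempt+1)
            if err != nil {
                    return "", fmt.Errorf("create %s: %w", name, err)
            }
            return newID, rt.StartContainer(newID)
    }

    // fakeRuntime just records the calls so the sketch runs standalone.
    type fakeRuntime struct{}

    func (fakeRuntime) RemoveContainer(id string) error { fmt.Println("RemoveContainer", id); return nil }
    func (fakeRuntime) CreateContainer(sb, name string, attempt uint32) (string, error) {
            fmt.Printf("CreateContainer %s in %s (attempt %d)\n", name, sb, attempt)
            return "new-container-id", nil
    }
    func (fakeRuntime) StartContainer(id string) error { fmt.Println("StartContainer", id); return nil }

    func main() {
            // IDs truncated from the log; Attempt goes 0 -> 1 as in the entries above.
            _, _ = restartInSandbox(fakeRuntime{}, "346496a34cce...", "df960eda949a...", "tigera-operator", 0)
    }

Keeping the sandbox (and thus the pod's network namespace and Calico-assigned IP) intact is what makes this a cheap in-place restart rather than a full pod teardown like the one at the top of this excerpt.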
Sep 12 17:36:38.373883 containerd[1498]: time="2025-09-12T17:36:38.373550706Z" level=info msg="CreateContainer within sandbox \"591fe1d47f6ccead07d413a64d400069862e56f6a3892be978ef150afe1367e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7bf85355f3a2ac8aa54e7c4304593482072d76218c6de7377a1cc01afa8ac5c7\""
Sep 12 17:36:38.374064 containerd[1498]: time="2025-09-12T17:36:38.374044622Z" level=info msg="StartContainer for \"7bf85355f3a2ac8aa54e7c4304593482072d76218c6de7377a1cc01afa8ac5c7\""
Sep 12 17:36:38.374760 containerd[1498]: time="2025-09-12T17:36:38.374666919Z" level=info msg="CreateContainer within sandbox \"df960eda949a64b9aba8b336adfacc2e5e28bfb6bd3efbccd9d6e7f33a6485ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"98cbe16469aa2ed607496f518387a790f9ae71b21512bcd8e46078136ab49c52\""
Sep 12 17:36:38.375488 containerd[1498]: time="2025-09-12T17:36:38.374994027Z" level=info msg="StartContainer for \"98cbe16469aa2ed607496f518387a790f9ae71b21512bcd8e46078136ab49c52\""
Sep 12 17:36:38.413277 systemd[1]: Started cri-containerd-98cbe16469aa2ed607496f518387a790f9ae71b21512bcd8e46078136ab49c52.scope - libcontainer container 98cbe16469aa2ed607496f518387a790f9ae71b21512bcd8e46078136ab49c52.
Sep 12 17:36:38.415571 systemd[1]: Started cri-containerd-7bf85355f3a2ac8aa54e7c4304593482072d76218c6de7377a1cc01afa8ac5c7.scope - libcontainer container 7bf85355f3a2ac8aa54e7c4304593482072d76218c6de7377a1cc01afa8ac5c7.
Sep 12 17:36:38.459712 containerd[1498]: time="2025-09-12T17:36:38.459573516Z" level=info msg="StartContainer for \"98cbe16469aa2ed607496f518387a790f9ae71b21512bcd8e46078136ab49c52\" returns successfully"
Sep 12 17:36:38.478909 containerd[1498]: time="2025-09-12T17:36:38.478195961Z" level=info msg="StartContainer for \"7bf85355f3a2ac8aa54e7c4304593482072d76218c6de7377a1cc01afa8ac5c7\" returns successfully"
Sep 12 17:36:41.654804 kubelet[2549]: E0912 17:36:41.599239 2549 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35420->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-c-e429241c3f.1864998e5068d9a9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-c-e429241c3f,UID:198327b94ac0feebaa62e74169e796db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-c-e429241c3f,},FirstTimestamp:2025-09-12 17:36:31.094790569 +0000 UTC m=+167.925629480,LastTimestamp:2025-09-12 17:36:31.094790569 +0000 UTC m=+167.925629480,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-c-e429241c3f,}"
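The closing entry shows the other side of the etcd timeouts already visible in the lease failures: the kubelet tried to post a readiness-probe warning Event, the API server (blocked on its etcd read at 10.0.0.2:2379) returned Unavailable, and the event was dropped with "will not retry" rather than queued, because events are best-effort and must never block pod management. A stdlib-only Go sketch of that drop-on-rejection pattern; postEvent and the retry policy here are illustrative assumptions, not kubelet's actual event-recorder logic:

    package main

    import (
            "context"
            "errors"
            "fmt"
            "time"
    )

    var errUnavailable = errors.New("rpc error: code = Unavailable")

    // postEvent is a hypothetical stand-in for the API-server round trip.
    func postEvent(ctx context.Context, msg string) error {
            return errUnavailable // simulate the etcd read timeout seen in the log
    }

    // recordEvent makes a few quick attempts, but gives up on a rejection:
    // dropping an event is preferable to blocking or unbounded queuing,
    // the trade-off visible in the kubelet entry above.
    func recordEvent(msg string) {
            for attempt := 1; attempt <= 3; attempt++ {
                    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
                    err := postEvent(ctx, msg)
                    cancel()
                    if err == nil {
                            return
                    }
                    if errors.Is(err, errUnavailable) {
                            fmt.Printf("Server rejected event (will not retry!): %v\n", err)
                            return
                    }
                    time.Sleep(time.Duration(attempt) * 100 * time.Millisecond) // brief backoff
            }
    }

    func main() {
            recordEvent("Readiness probe failed: HTTP probe failed with statuscode: 500")
    }

Note that only the Event is lost; the restarted kube-controller-manager and tigera-operator containers above came up successfully despite the degraded etcd path.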