Sep 9 04:00:37.097577 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:41:17 -00 2025
Sep 9 04:00:37.097613 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 04:00:37.097627 kernel: BIOS-provided physical RAM map:
Sep 9 04:00:37.097644 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 9 04:00:37.097654 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 9 04:00:37.097665 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 9 04:00:37.097677 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Sep 9 04:00:37.097687 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Sep 9 04:00:37.097698 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 9 04:00:37.097709 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 9 04:00:37.097719 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 04:00:37.097730 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 9 04:00:37.097754 kernel: NX (Execute Disable) protection: active
Sep 9 04:00:37.097766 kernel: APIC: Static calls initialized
Sep 9 04:00:37.097779 kernel: SMBIOS 2.8 present.
Sep 9 04:00:37.097797 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Sep 9 04:00:37.097810 kernel: Hypervisor detected: KVM
Sep 9 04:00:37.097827 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 04:00:37.097839 kernel: kvm-clock: using sched offset of 6332753199 cycles
Sep 9 04:00:37.097852 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 04:00:37.097864 kernel: tsc: Detected 2499.998 MHz processor
Sep 9 04:00:37.097876 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 04:00:37.097888 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 04:00:37.097900 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Sep 9 04:00:37.098105 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 9 04:00:37.098123 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 04:00:37.098143 kernel: Using GB pages for direct mapping
Sep 9 04:00:37.098155 kernel: ACPI: Early table checksum verification disabled
Sep 9 04:00:37.098167 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Sep 9 04:00:37.098179 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:00:37.098191 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:00:37.098203 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:00:37.098215 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Sep 9 04:00:37.098227 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:00:37.098239 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:00:37.098256 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:00:37.098268 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:00:37.098280 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Sep 9 04:00:37.098292 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Sep 9 04:00:37.098304 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Sep 9 04:00:37.098322 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Sep 9 04:00:37.098335 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Sep 9 04:00:37.098353 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Sep 9 04:00:37.098365 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Sep 9 04:00:37.098378 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 9 04:00:37.098398 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 9 04:00:37.098411 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Sep 9 04:00:37.098424 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Sep 9 04:00:37.098436 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Sep 9 04:00:37.098453 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Sep 9 04:00:37.098466 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Sep 9 04:00:37.098478 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Sep 9 04:00:37.098490 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Sep 9 04:00:37.098503 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Sep 9 04:00:37.098515 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Sep 9 04:00:37.098527 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Sep 9 04:00:37.098539 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Sep 9 04:00:37.098551 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Sep 9 04:00:37.098569 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Sep 9 04:00:37.098588 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Sep 9 04:00:37.098601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 9 04:00:37.098613 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 9 04:00:37.098626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Sep 9 04:00:37.098638 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Sep 9 04:00:37.098651 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Sep 9 04:00:37.098663 kernel: Zone ranges:
Sep 9 04:00:37.098676 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 04:00:37.098688 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Sep 9 04:00:37.098705 kernel: Normal empty
Sep 9 04:00:37.098718 kernel: Movable zone start for each node
Sep 9 04:00:37.098731 kernel: Early memory node ranges
Sep 9 04:00:37.098743 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 9 04:00:37.098755 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Sep 9 04:00:37.098768 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Sep 9 04:00:37.098780 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 04:00:37.098792 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 04:00:37.098812 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Sep 9 04:00:37.098826 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 04:00:37.098844 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 04:00:37.098857 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 04:00:37.098869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 04:00:37.098882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 04:00:37.098894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 04:00:37.098907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 04:00:37.098931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 04:00:37.098944 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 04:00:37.098981 kernel: TSC deadline timer available
Sep 9 04:00:37.099003 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Sep 9 04:00:37.099016 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 04:00:37.099029 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 9 04:00:37.099041 kernel: Booting paravirtualized kernel on KVM
Sep 9 04:00:37.099054 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 04:00:37.099066 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 9 04:00:37.099079 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144
Sep 9 04:00:37.099092 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152
Sep 9 04:00:37.099104 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 9 04:00:37.099121 kernel: kvm-guest: PV spinlocks enabled
Sep 9 04:00:37.099134 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 04:00:37.099148 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 04:00:37.099161 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 04:00:37.099174 kernel: random: crng init done
Sep 9 04:00:37.099186 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 04:00:37.099199 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 9 04:00:37.099216 kernel: Fallback order for Node 0: 0
Sep 9 04:00:37.099229 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Sep 9 04:00:37.099247 kernel: Policy zone: DMA32
Sep 9 04:00:37.099261 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 04:00:37.099274 kernel: software IO TLB: area num 16.
Sep 9 04:00:37.099287 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42880K init, 2316K bss, 194824K reserved, 0K cma-reserved)
Sep 9 04:00:37.099300 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 9 04:00:37.099312 kernel: Kernel/User page tables isolation: enabled
Sep 9 04:00:37.099325 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 9 04:00:37.099343 kernel: ftrace: allocated 149 pages with 4 groups
Sep 9 04:00:37.099356 kernel: Dynamic Preempt: voluntary
Sep 9 04:00:37.099368 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 04:00:37.099381 kernel: rcu: RCU event tracing is enabled.
Sep 9 04:00:37.099394 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 9 04:00:37.099412 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 04:00:37.099433 kernel: Rude variant of Tasks RCU enabled.
Sep 9 04:00:37.099451 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 04:00:37.099464 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 04:00:37.099478 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 9 04:00:37.099491 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Sep 9 04:00:37.099504 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 04:00:37.099521 kernel: Console: colour VGA+ 80x25
Sep 9 04:00:37.099535 kernel: printk: console [tty0] enabled
Sep 9 04:00:37.099548 kernel: printk: console [ttyS0] enabled
Sep 9 04:00:37.099561 kernel: ACPI: Core revision 20230628
Sep 9 04:00:37.099574 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 04:00:37.099591 kernel: x2apic enabled
Sep 9 04:00:37.099605 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 04:00:37.099624 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 9 04:00:37.099638 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 9 04:00:37.099652 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 04:00:37.099665 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 9 04:00:37.099678 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 9 04:00:37.099691 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 04:00:37.099704 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 04:00:37.099717 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 04:00:37.099735 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 9 04:00:37.099749 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 04:00:37.099761 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 04:00:37.099774 kernel: MDS: Mitigation: Clear CPU buffers
Sep 9 04:00:37.099787 kernel: MMIO Stale Data: Unknown: No mitigations
Sep 9 04:00:37.099800 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 9 04:00:37.099813 kernel: active return thunk: its_return_thunk
Sep 9 04:00:37.099825 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 9 04:00:37.099839 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 04:00:37.099852 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 04:00:37.099865 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 04:00:37.099882 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 04:00:37.099896 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 9 04:00:37.099921 kernel: Freeing SMP alternatives memory: 32K
Sep 9 04:00:37.099936 kernel: pid_max: default: 32768 minimum: 301
Sep 9 04:00:37.099949 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 9 04:00:37.099996 kernel: landlock: Up and running.
Sep 9 04:00:37.100010 kernel: SELinux: Initializing.
Sep 9 04:00:37.100023 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 9 04:00:37.100036 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 9 04:00:37.100049 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Sep 9 04:00:37.100062 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 9 04:00:37.100082 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 9 04:00:37.100096 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 9 04:00:37.100109 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Sep 9 04:00:37.100122 kernel: signal: max sigframe size: 1776
Sep 9 04:00:37.100135 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 04:00:37.100149 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 04:00:37.100162 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 9 04:00:37.100175 kernel: smp: Bringing up secondary CPUs ...
Sep 9 04:00:37.100188 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 04:00:37.100206 kernel: .... node #0, CPUs: #1
Sep 9 04:00:37.100219 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Sep 9 04:00:37.100232 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 04:00:37.100245 kernel: smpboot: Max logical packages: 16
Sep 9 04:00:37.100258 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 9 04:00:37.100271 kernel: devtmpfs: initialized
Sep 9 04:00:37.100284 kernel: x86/mm: Memory block size: 128MB
Sep 9 04:00:37.100297 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 04:00:37.100311 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 9 04:00:37.100328 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 04:00:37.100342 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 04:00:37.100355 kernel: audit: initializing netlink subsys (disabled)
Sep 9 04:00:37.100368 kernel: audit: type=2000 audit(1757390435.876:1): state=initialized audit_enabled=0 res=1
Sep 9 04:00:37.100381 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 04:00:37.100394 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 04:00:37.100407 kernel: cpuidle: using governor menu
Sep 9 04:00:37.100420 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 04:00:37.100433 kernel: dca service started, version 1.12.1
Sep 9 04:00:37.100451 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 9 04:00:37.100465 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 9 04:00:37.100478 kernel: PCI: Using configuration type 1 for base access
Sep 9 04:00:37.100491 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 04:00:37.100504 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 04:00:37.100517 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 04:00:37.100530 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 04:00:37.100543 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 04:00:37.100556 kernel: ACPI: Added _OSI(Module Device)
Sep 9 04:00:37.100574 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 04:00:37.100588 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 04:00:37.100601 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 04:00:37.100614 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 9 04:00:37.100627 kernel: ACPI: Interpreter enabled
Sep 9 04:00:37.100640 kernel: ACPI: PM: (supports S0 S5)
Sep 9 04:00:37.100653 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 04:00:37.100666 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 04:00:37.100679 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 04:00:37.100697 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 04:00:37.100710 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 04:00:37.103057 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 04:00:37.103269 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 04:00:37.103478 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 04:00:37.103499 kernel: PCI host bridge to bus 0000:00
Sep 9 04:00:37.103702 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 04:00:37.103885 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 04:00:37.104105 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 04:00:37.104273 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Sep 9 04:00:37.104439 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 9 04:00:37.104604 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Sep 9 04:00:37.104770 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 04:00:37.107737 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 9 04:00:37.108007 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Sep 9 04:00:37.108196 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Sep 9 04:00:37.108380 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Sep 9 04:00:37.108561 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Sep 9 04:00:37.108741 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 04:00:37.108986 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 9 04:00:37.109184 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Sep 9 04:00:37.109395 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 9 04:00:37.109577 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Sep 9 04:00:37.109788 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 9 04:00:37.112036 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Sep 9 04:00:37.112273 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 9 04:00:37.112476 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Sep 9 04:00:37.112677 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 9 04:00:37.112869 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Sep 9 04:00:37.114415 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 9 04:00:37.114617 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Sep 9 04:00:37.114836 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 9 04:00:37.115106 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Sep 9 04:00:37.115322 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 9 04:00:37.115508 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Sep 9 04:00:37.115713 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 9 04:00:37.115895 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 9 04:00:37.116109 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Sep 9 04:00:37.116290 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 9 04:00:37.116532 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Sep 9 04:00:37.116726 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 9 04:00:37.116907 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 9 04:00:37.117129 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Sep 9 04:00:37.117312 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Sep 9 04:00:37.117507 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 9 04:00:37.117690 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 04:00:37.117905 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 9 04:00:37.118128 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Sep 9 04:00:37.118315 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Sep 9 04:00:37.118510 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 9 04:00:37.118692 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 9 04:00:37.118950 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Sep 9 04:00:37.119174 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Sep 9 04:00:37.119363 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 9 04:00:37.119545 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 9 04:00:37.119727 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 9 04:00:37.119981 kernel: pci_bus 0000:02: extended config space not accessible
Sep 9 04:00:37.120203 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Sep 9 04:00:37.120412 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Sep 9 04:00:37.120600 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 9 04:00:37.120791 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 9 04:00:37.121029 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 9 04:00:37.121220 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Sep 9 04:00:37.121406 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 9 04:00:37.121589 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 9 04:00:37.121781 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 9 04:00:37.125190 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 9 04:00:37.125413 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 9 04:00:37.125615 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 9 04:00:37.125823 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 9 04:00:37.126069 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 9 04:00:37.126255 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 9 04:00:37.126434 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 9 04:00:37.126624 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 9 04:00:37.126806 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 9 04:00:37.127365 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 9 04:00:37.127554 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 9 04:00:37.127739 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 9 04:00:37.127931 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 9 04:00:37.128177 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 9 04:00:37.129228 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 9 04:00:37.129427 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 9 04:00:37.129612 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 9 04:00:37.129800 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 9 04:00:37.130016 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 9 04:00:37.132212 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 9 04:00:37.132237 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 04:00:37.132252 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 04:00:37.132265 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 04:00:37.132288 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 04:00:37.132302 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 04:00:37.132315 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 04:00:37.132329 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 04:00:37.132343 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 04:00:37.132356 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 04:00:37.132370 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 04:00:37.132383 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 04:00:37.132397 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 04:00:37.132416 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 04:00:37.132430 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 04:00:37.132443 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 04:00:37.132457 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 04:00:37.132470 kernel: iommu: Default domain type: Translated
Sep 9 04:00:37.132483 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 04:00:37.132497 kernel: PCI: Using ACPI for IRQ routing
Sep 9 04:00:37.132510 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 04:00:37.132524 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 9 04:00:37.132542 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Sep 9 04:00:37.132723 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 04:00:37.132904 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 04:00:37.133166 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 04:00:37.133188 kernel: vgaarb: loaded
Sep 9 04:00:37.133202 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 04:00:37.133215 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 04:00:37.133229 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 04:00:37.133242 kernel: pnp: PnP ACPI init
Sep 9 04:00:37.133458 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 9 04:00:37.133481 kernel: pnp: PnP ACPI: found 5 devices
Sep 9 04:00:37.133496 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 04:00:37.133509 kernel: NET: Registered PF_INET protocol family
Sep 9 04:00:37.133523 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 04:00:37.133536 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 9 04:00:37.133550 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 04:00:37.133563 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 9 04:00:37.133584 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 9 04:00:37.133597 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 9 04:00:37.133611 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 9 04:00:37.133624 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 9 04:00:37.133637 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 04:00:37.133651 kernel: NET: Registered PF_XDP protocol family
Sep 9 04:00:37.133828 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Sep 9 04:00:37.135099 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 9 04:00:37.135304 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 9 04:00:37.135490 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 9 04:00:37.135674 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 9 04:00:37.135860 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 9 04:00:37.137136 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 9 04:00:37.138136 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 9 04:00:37.138340 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 9 04:00:37.138525 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 9 04:00:37.138709 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 9 04:00:37.138891 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 9 04:00:37.140139 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 9 04:00:37.140331 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 9 04:00:37.140515 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 9 04:00:37.140708 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 9 04:00:37.140939 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 9 04:00:37.144350 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 9 04:00:37.144541 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 9 04:00:37.144726 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 9 04:00:37.144920 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 9 04:00:37.145140 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 9 04:00:37.145325 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 9 04:00:37.145505 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 9 04:00:37.145736 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 9 04:00:37.145930 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 9 04:00:37.146133 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 9 04:00:37.146314 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 9 04:00:37.146496 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 9 04:00:37.146688 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 9 04:00:37.146878 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 9 04:00:37.148179 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 9 04:00:37.148369 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 9 04:00:37.148551 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 9 04:00:37.148733 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 9 04:00:37.148929 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 9 04:00:37.149172 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 9 04:00:37.149358 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 9 04:00:37.149543 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 9 04:00:37.149736 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 9 04:00:37.149930 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 9 04:00:37.152230 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 9 04:00:37.152428 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 9 04:00:37.152614 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 9 04:00:37.152808 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 9 04:00:37.153025 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 9 04:00:37.153212 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 9 04:00:37.153393 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 9 04:00:37.153574 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 9 04:00:37.153799 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 9 04:00:37.155248 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 04:00:37.155429 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 04:00:37.155596 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 04:00:37.155775 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Sep 9 04:00:37.155974 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 9 04:00:37.156145 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Sep 9 04:00:37.156334 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 9 04:00:37.156514 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Sep 9 04:00:37.156689 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 9 04:00:37.156876 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 9 04:00:37.157159 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Sep 9 04:00:37.157333 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 9 04:00:37.157503 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 9 04:00:37.157711 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Sep 9 04:00:37.157884 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 9 04:00:37.158130 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 9 04:00:37.158344 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 9 04:00:37.158518 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 9 04:00:37.158687 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 9 04:00:37.158923 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Sep 9 04:00:37.159116 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 9 04:00:37.159287 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 9 04:00:37.159518 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Sep 9 04:00:37.159705 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 9 04:00:37.159886 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 9 04:00:37.160109 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Sep 9 04:00:37.160286 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 9 04:00:37.160458 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 9 04:00:37.160644 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Sep 9 04:00:37.160818 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 9 04:00:37.161029 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 9 04:00:37.161052 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 04:00:37.161067 kernel: PCI: CLS 0 bytes, default 64
Sep 9 04:00:37.161082 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 9 04:00:37.161096 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Sep 9 04:00:37.161110 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 9 04:00:37.161125 kernel: clocksource: tsc: mask: 0xffffffffffffffff
max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 9 04:00:37.161139 kernel: Initialise system trusted keyrings Sep 9 04:00:37.161161 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 9 04:00:37.161176 kernel: Key type asymmetric registered Sep 9 04:00:37.161189 kernel: Asymmetric key parser 'x509' registered Sep 9 04:00:37.161203 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 9 04:00:37.161218 kernel: io scheduler mq-deadline registered Sep 9 04:00:37.161232 kernel: io scheduler kyber registered Sep 9 04:00:37.161246 kernel: io scheduler bfq registered Sep 9 04:00:37.161434 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 9 04:00:37.161626 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 9 04:00:37.161823 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:00:37.162040 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 9 04:00:37.162270 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 9 04:00:37.162456 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:00:37.162652 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 9 04:00:37.162836 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 9 04:00:37.163083 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:00:37.163268 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 9 04:00:37.163447 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 9 04:00:37.163626 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:00:37.163808 kernel: pcieport 0000:00:02.4: PME: Signaling 
with IRQ 28 Sep 9 04:00:37.164019 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 9 04:00:37.164213 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:00:37.164399 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 9 04:00:37.164582 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 9 04:00:37.164764 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:00:37.164975 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 9 04:00:37.165163 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 9 04:00:37.165356 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:00:37.165544 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 9 04:00:37.165730 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 9 04:00:37.165925 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:00:37.165998 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 04:00:37.166015 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 04:00:37.166037 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 04:00:37.166051 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 04:00:37.166066 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 04:00:37.166080 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 04:00:37.166095 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 04:00:37.166109 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 04:00:37.166123 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 
Sep 9 04:00:37.166327 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 9 04:00:37.166506 kernel: rtc_cmos 00:03: registered as rtc0 Sep 9 04:00:37.166688 kernel: rtc_cmos 00:03: setting system clock to 2025-09-09T04:00:36 UTC (1757390436) Sep 9 04:00:37.166860 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 9 04:00:37.166881 kernel: intel_pstate: CPU model not supported Sep 9 04:00:37.166902 kernel: NET: Registered PF_INET6 protocol family Sep 9 04:00:37.166930 kernel: Segment Routing with IPv6 Sep 9 04:00:37.166944 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 04:00:37.167081 kernel: NET: Registered PF_PACKET protocol family Sep 9 04:00:37.167098 kernel: Key type dns_resolver registered Sep 9 04:00:37.167119 kernel: IPI shorthand broadcast: enabled Sep 9 04:00:37.167134 kernel: sched_clock: Marking stable (1516004474, 237092615)->(2025943400, -272846311) Sep 9 04:00:37.167148 kernel: registered taskstats version 1 Sep 9 04:00:37.167162 kernel: Loading compiled-in X.509 certificates Sep 9 04:00:37.167177 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: cc5240ef94b546331b2896cdc739274c03278c51' Sep 9 04:00:37.167191 kernel: Key type .fscrypt registered Sep 9 04:00:37.167205 kernel: Key type fscrypt-provisioning registered Sep 9 04:00:37.167219 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 9 04:00:37.167233 kernel: ima: Allocated hash algorithm: sha1 Sep 9 04:00:37.167253 kernel: ima: No architecture policies found Sep 9 04:00:37.167266 kernel: clk: Disabling unused clocks Sep 9 04:00:37.167281 kernel: Freeing unused kernel image (initmem) memory: 42880K Sep 9 04:00:37.167295 kernel: Write protecting the kernel read-only data: 36864k Sep 9 04:00:37.167309 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 9 04:00:37.167323 kernel: Run /init as init process Sep 9 04:00:37.167337 kernel: with arguments: Sep 9 04:00:37.167351 kernel: /init Sep 9 04:00:37.167364 kernel: with environment: Sep 9 04:00:37.167383 kernel: HOME=/ Sep 9 04:00:37.167397 kernel: TERM=linux Sep 9 04:00:37.167411 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 04:00:37.167429 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 04:00:37.167446 systemd[1]: Detected virtualization kvm. Sep 9 04:00:37.167461 systemd[1]: Detected architecture x86-64. Sep 9 04:00:37.167476 systemd[1]: Running in initrd. Sep 9 04:00:37.167490 systemd[1]: No hostname configured, using default hostname. Sep 9 04:00:37.167510 systemd[1]: Hostname set to . Sep 9 04:00:37.167526 systemd[1]: Initializing machine ID from VM UUID. Sep 9 04:00:37.167540 systemd[1]: Queued start job for default target initrd.target. Sep 9 04:00:37.167555 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 04:00:37.167570 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 04:00:37.167586 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 9 04:00:37.167601 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 04:00:37.167621 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 04:00:37.167637 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 04:00:37.167654 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 04:00:37.167669 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 04:00:37.167684 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 04:00:37.167699 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 04:00:37.167714 systemd[1]: Reached target paths.target - Path Units. Sep 9 04:00:37.167735 systemd[1]: Reached target slices.target - Slice Units. Sep 9 04:00:37.167750 systemd[1]: Reached target swap.target - Swaps. Sep 9 04:00:37.167765 systemd[1]: Reached target timers.target - Timer Units. Sep 9 04:00:37.167780 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 04:00:37.167795 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 04:00:37.167810 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 04:00:37.167826 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 9 04:00:37.167841 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 04:00:37.167856 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 04:00:37.167876 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 04:00:37.167891 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 04:00:37.167906 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Sep 9 04:00:37.167933 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 04:00:37.167948 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 04:00:37.167976 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 04:00:37.167992 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 04:00:37.168007 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 04:00:37.168072 systemd-journald[202]: Collecting audit messages is disabled. Sep 9 04:00:37.168113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 04:00:37.168128 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 04:00:37.168143 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 04:00:37.168164 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 04:00:37.168185 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 04:00:37.168202 systemd-journald[202]: Journal started Sep 9 04:00:37.168234 systemd-journald[202]: Runtime Journal (/run/log/journal/3265d375dc0a45839c6108def0d37f54) is 4.7M, max 38.0M, 33.2M free. Sep 9 04:00:37.130494 systemd-modules-load[203]: Inserted module 'overlay' Sep 9 04:00:37.230534 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 04:00:37.230570 kernel: Bridge firewalling registered Sep 9 04:00:37.230590 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 04:00:37.183204 systemd-modules-load[203]: Inserted module 'br_netfilter' Sep 9 04:00:37.240605 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 04:00:37.241706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 04:00:37.251330 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 04:00:37.255169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 04:00:37.264154 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 04:00:37.268015 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 04:00:37.284813 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 04:00:37.287249 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 04:00:37.289318 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 04:00:37.299676 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 04:00:37.302386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 04:00:37.308215 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 04:00:37.314515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 04:00:37.321290 dracut-cmdline[232]: dracut-dracut-053 Sep 9 04:00:37.328267 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 04:00:37.367653 systemd-resolved[236]: Positive Trust Anchors: Sep 9 04:00:37.367683 systemd-resolved[236]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 04:00:37.367730 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 04:00:37.372348 systemd-resolved[236]: Defaulting to hostname 'linux'. Sep 9 04:00:37.375146 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 04:00:37.376000 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 04:00:37.453998 kernel: SCSI subsystem initialized Sep 9 04:00:37.467001 kernel: Loading iSCSI transport class v2.0-870. Sep 9 04:00:37.480993 kernel: iscsi: registered transport (tcp) Sep 9 04:00:37.510329 kernel: iscsi: registered transport (qla4xxx) Sep 9 04:00:37.510438 kernel: QLogic iSCSI HBA Driver Sep 9 04:00:37.572383 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 04:00:37.580284 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 04:00:37.616083 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 9 04:00:37.616182 kernel: device-mapper: uevent: version 1.0.3 Sep 9 04:00:37.618545 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 9 04:00:37.670158 kernel: raid6: sse2x4 gen() 13782 MB/s Sep 9 04:00:37.688003 kernel: raid6: sse2x2 gen() 9275 MB/s Sep 9 04:00:37.706717 kernel: raid6: sse2x1 gen() 10229 MB/s Sep 9 04:00:37.706830 kernel: raid6: using algorithm sse2x4 gen() 13782 MB/s Sep 9 04:00:37.725673 kernel: raid6: .... xor() 7692 MB/s, rmw enabled Sep 9 04:00:37.725862 kernel: raid6: using ssse3x2 recovery algorithm Sep 9 04:00:37.753552 kernel: xor: automatically using best checksumming function avx Sep 9 04:00:37.973013 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 04:00:37.988266 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 04:00:37.996200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 04:00:38.024098 systemd-udevd[420]: Using default interface naming scheme 'v255'. Sep 9 04:00:38.031040 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 04:00:38.038375 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 04:00:38.066104 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Sep 9 04:00:38.111488 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 04:00:38.118231 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 04:00:38.243890 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 04:00:38.253233 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 04:00:38.288511 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 04:00:38.290251 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 9 04:00:38.292197 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 04:00:38.294882 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 04:00:38.305180 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 04:00:38.340583 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 04:00:38.396011 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Sep 9 04:00:38.410047 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 9 04:00:38.416110 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 04:00:38.440910 kernel: AVX version of gcm_enc/dec engaged. Sep 9 04:00:38.441027 kernel: AES CTR mode by8 optimization enabled Sep 9 04:00:38.455892 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 04:00:38.466817 kernel: ACPI: bus type USB registered Sep 9 04:00:38.466853 kernel: usbcore: registered new interface driver usbfs Sep 9 04:00:38.466885 kernel: usbcore: registered new interface driver hub Sep 9 04:00:38.466906 kernel: usbcore: registered new device driver usb Sep 9 04:00:38.456124 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 04:00:38.472522 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 04:00:38.479376 kernel: libata version 3.00 loaded. Sep 9 04:00:38.474396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 04:00:38.474632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 04:00:38.475413 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 04:00:38.486368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 04:00:38.521447 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Sep 9 04:00:38.521664 kernel: GPT:17805311 != 125829119 Sep 9 04:00:38.521706 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 04:00:38.521738 kernel: GPT:17805311 != 125829119 Sep 9 04:00:38.521770 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 04:00:38.521802 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 04:00:38.536370 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 9 04:00:38.536763 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 9 04:00:38.539633 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 9 04:00:38.544228 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 04:00:38.544473 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 04:00:38.544496 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 9 04:00:38.558446 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 9 04:00:38.558874 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 04:00:38.559163 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 9 04:00:38.559402 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 9 04:00:38.559635 kernel: hub 1-0:1.0: USB hub found Sep 9 04:00:38.559930 kernel: hub 1-0:1.0: 4 ports detected Sep 9 04:00:38.564013 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Sep 9 04:00:38.564374 kernel: hub 2-0:1.0: USB hub found Sep 9 04:00:38.565318 kernel: hub 2-0:1.0: 4 ports detected Sep 9 04:00:38.565548 kernel: scsi host0: ahci Sep 9 04:00:38.566978 kernel: scsi host1: ahci Sep 9 04:00:38.574021 kernel: scsi host2: ahci Sep 9 04:00:38.577278 kernel: scsi host3: ahci Sep 9 04:00:38.579167 kernel: scsi host4: ahci Sep 9 04:00:38.580059 kernel: scsi host5: ahci Sep 9 04:00:38.580675 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Sep 9 04:00:38.580700 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Sep 9 04:00:38.580746 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Sep 9 04:00:38.580767 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Sep 9 04:00:38.580785 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Sep 9 04:00:38.580814 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Sep 9 04:00:38.599009 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467) Sep 9 04:00:38.599101 kernel: BTRFS: device fsid 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (466) Sep 9 04:00:38.624643 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 04:00:38.694303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 04:00:38.702607 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 04:00:38.710092 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 04:00:38.716379 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 04:00:38.717282 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 9 04:00:38.734186 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 04:00:38.736156 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 04:00:38.760711 disk-uuid[565]: Primary Header is updated. Sep 9 04:00:38.760711 disk-uuid[565]: Secondary Entries is updated. Sep 9 04:00:38.760711 disk-uuid[565]: Secondary Header is updated. Sep 9 04:00:38.771221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 04:00:38.774565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 04:00:38.782004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 04:00:38.805001 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 9 04:00:38.889462 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 04:00:38.889563 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 04:00:38.894203 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 04:00:38.895984 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 04:00:38.896022 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 9 04:00:38.901010 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 04:00:38.954990 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 04:00:38.962051 kernel: usbcore: registered new interface driver usbhid Sep 9 04:00:38.962097 kernel: usbhid: USB HID core driver Sep 9 04:00:38.970469 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Sep 9 04:00:38.970513 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 9 04:00:39.783998 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 04:00:39.785306 disk-uuid[569]: The operation has completed successfully. Sep 9 04:00:39.854790 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 9 04:00:39.855017 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 04:00:39.881258 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 04:00:39.886697 sh[585]: Success Sep 9 04:00:39.906306 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Sep 9 04:00:39.978598 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 04:00:39.982109 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 04:00:39.984532 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 04:00:40.017103 kernel: BTRFS info (device dm-0): first mount of filesystem 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a Sep 9 04:00:40.017174 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 04:00:40.019237 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 9 04:00:40.021431 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 04:00:40.023064 kernel: BTRFS info (device dm-0): using free space tree Sep 9 04:00:40.032951 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 04:00:40.035257 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 04:00:40.047192 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 04:00:40.050842 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 9 04:00:40.068996 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 04:00:40.069063 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 04:00:40.069146 kernel: BTRFS info (device vda6): using free space tree Sep 9 04:00:40.075007 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 04:00:40.090505 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 04:00:40.093275 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 04:00:40.100428 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 04:00:40.108172 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 04:00:40.279290 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 04:00:40.301376 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 04:00:40.336756 systemd-networkd[767]: lo: Link UP Sep 9 04:00:40.336770 systemd-networkd[767]: lo: Gained carrier Sep 9 04:00:40.339591 systemd-networkd[767]: Enumeration completed Sep 9 04:00:40.339751 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 04:00:40.340721 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 04:00:40.340727 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 04:00:40.341491 systemd[1]: Reached target network.target - Network. Sep 9 04:00:40.343023 systemd-networkd[767]: eth0: Link UP Sep 9 04:00:40.343029 systemd-networkd[767]: eth0: Gained carrier Sep 9 04:00:40.343041 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 04:00:40.360913 ignition[667]: Ignition 2.19.0 Sep 9 04:00:40.360939 ignition[667]: Stage: fetch-offline Sep 9 04:00:40.361048 ignition[667]: no configs at "/usr/lib/ignition/base.d" Sep 9 04:00:40.363410 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 04:00:40.361080 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 04:00:40.361247 ignition[667]: parsed url from cmdline: "" Sep 9 04:00:40.361254 ignition[667]: no config URL provided Sep 9 04:00:40.361264 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 04:00:40.361282 ignition[667]: no config at "/usr/lib/ignition/user.ign" Sep 9 04:00:40.361291 ignition[667]: failed to fetch config: resource requires networking Sep 9 04:00:40.361632 ignition[667]: Ignition finished successfully Sep 9 04:00:40.374295 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 9 04:00:40.415125 systemd-networkd[767]: eth0: DHCPv4 address 10.230.58.214/30, gateway 10.230.58.213 acquired from 10.230.58.213 Sep 9 04:00:40.417567 ignition[775]: Ignition 2.19.0 Sep 9 04:00:40.417579 ignition[775]: Stage: fetch Sep 9 04:00:40.417950 ignition[775]: no configs at "/usr/lib/ignition/base.d" Sep 9 04:00:40.419009 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 04:00:40.421559 ignition[775]: parsed url from cmdline: "" Sep 9 04:00:40.421568 ignition[775]: no config URL provided Sep 9 04:00:40.421585 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 04:00:40.421607 ignition[775]: no config at "/usr/lib/ignition/user.ign" Sep 9 04:00:40.423058 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Sep 9 04:00:40.424525 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Sep 9 04:00:40.424564 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Sep 9 04:00:40.446112 ignition[775]: GET result: OK
Sep 9 04:00:40.446423 ignition[775]: parsing config with SHA512: 155d0c1dd896cbe1c5d694a6d09638efc980f38cf852adb29470418485bfd4b4406777665fb72581e977a37777998d025cb02496c52674359ea21d7504e9b439
Sep 9 04:00:40.457167 unknown[775]: fetched base config from "system"
Sep 9 04:00:40.457187 unknown[775]: fetched base config from "system"
Sep 9 04:00:40.458091 ignition[775]: fetch: fetch complete
Sep 9 04:00:40.457197 unknown[775]: fetched user config from "openstack"
Sep 9 04:00:40.458101 ignition[775]: fetch: fetch passed
Sep 9 04:00:40.460933 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 04:00:40.458201 ignition[775]: Ignition finished successfully
Sep 9 04:00:40.476675 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 04:00:40.542465 ignition[782]: Ignition 2.19.0
Sep 9 04:00:40.542490 ignition[782]: Stage: kargs
Sep 9 04:00:40.542841 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:00:40.542863 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 04:00:40.547936 ignition[782]: kargs: kargs passed
Sep 9 04:00:40.548046 ignition[782]: Ignition finished successfully
Sep 9 04:00:40.549512 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 04:00:40.563327 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 04:00:40.590364 ignition[788]: Ignition 2.19.0
Sep 9 04:00:40.590380 ignition[788]: Stage: disks
Sep 9 04:00:40.590703 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:00:40.590727 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 04:00:40.592431 ignition[788]: disks: disks passed
Sep 9 04:00:40.595407 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 04:00:40.592512 ignition[788]: Ignition finished successfully
Sep 9 04:00:40.597293 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 04:00:40.598416 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 04:00:40.600060 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 04:00:40.601615 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 04:00:40.603277 systemd[1]: Reached target basic.target - Basic System.
Sep 9 04:00:40.611053 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 04:00:40.647336 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 9 04:00:40.653347 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 04:00:40.660077 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 04:00:40.795016 kernel: EXT4-fs (vda9): mounted filesystem ee55a213-d578-493d-a79b-e10c399cd35c r/w with ordered data mode. Quota mode: none.
Sep 9 04:00:40.795560 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 04:00:40.797048 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 04:00:40.804115 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 04:00:40.807134 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 04:00:40.808273 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 04:00:40.812218 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Sep 9 04:00:40.814410 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 04:00:40.814458 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 04:00:40.829088 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (805)
Sep 9 04:00:40.830788 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 04:00:40.838995 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 04:00:40.839066 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 04:00:40.842572 kernel: BTRFS info (device vda6): using free space tree
Sep 9 04:00:40.847491 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 04:00:40.858516 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 04:00:40.874230 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 04:00:40.911648 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 04:00:40.919978 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory
Sep 9 04:00:40.934497 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 04:00:40.947781 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 04:00:41.108080 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 04:00:41.115097 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 04:00:41.119117 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 04:00:41.132885 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 04:00:41.137033 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 04:00:41.170645 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 04:00:41.186586 ignition[921]: INFO : Ignition 2.19.0
Sep 9 04:00:41.187867 ignition[921]: INFO : Stage: mount
Sep 9 04:00:41.188854 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:00:41.191066 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 04:00:41.192146 ignition[921]: INFO : mount: mount passed
Sep 9 04:00:41.192939 ignition[921]: INFO : Ignition finished successfully
Sep 9 04:00:41.195059 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 04:00:42.311853 systemd-networkd[767]: eth0: Gained IPv6LL
Sep 9 04:00:43.822558 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8eb5:24:19ff:fee6:3ad6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8eb5:24:19ff:fee6:3ad6/64 assigned by NDisc.
Sep 9 04:00:43.822572 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Sep 9 04:00:48.075394 coreos-metadata[807]: Sep 09 04:00:48.075 WARN failed to locate config-drive, using the metadata service API instead
Sep 9 04:00:48.098462 coreos-metadata[807]: Sep 09 04:00:48.098 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 9 04:00:48.118028 coreos-metadata[807]: Sep 09 04:00:48.117 INFO Fetch successful
Sep 9 04:00:48.119351 coreos-metadata[807]: Sep 09 04:00:48.119 INFO wrote hostname srv-gbnqu.gb1.brightbox.com to /sysroot/etc/hostname
Sep 9 04:00:48.123950 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Sep 9 04:00:48.124554 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Sep 9 04:00:48.135145 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 04:00:48.177321 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 04:00:48.195023 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Sep 9 04:00:48.200298 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 04:00:48.200347 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 04:00:48.200368 kernel: BTRFS info (device vda6): using free space tree Sep 9 04:00:48.207010 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 04:00:48.210699 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 04:00:48.248597 ignition[957]: INFO : Ignition 2.19.0 Sep 9 04:00:48.248597 ignition[957]: INFO : Stage: files Sep 9 04:00:48.250516 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 04:00:48.250516 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 04:00:48.250516 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Sep 9 04:00:48.253430 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 04:00:48.253430 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 04:00:48.255804 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 04:00:48.255804 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 04:00:48.255804 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 04:00:48.255742 unknown[957]: wrote ssh authorized keys file for user: core Sep 9 04:00:48.260042 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 04:00:48.260042 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 9 04:00:48.591872 ignition[957]: 
INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 04:00:49.796631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 04:00:49.796631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 04:00:49.796631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 04:00:49.796631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 
04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 04:00:49.808152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 9 04:00:50.147348 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 04:00:51.847460 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 04:00:51.847460 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 04:00:51.851414 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 04:00:51.851414 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 04:00:51.851414 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 04:00:51.851414 ignition[957]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 9 04:00:51.851414 ignition[957]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 04:00:51.851414 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 04:00:51.862617 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 04:00:51.862617 
ignition[957]: INFO : files: files passed Sep 9 04:00:51.862617 ignition[957]: INFO : Ignition finished successfully Sep 9 04:00:51.856831 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 04:00:51.870287 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 04:00:51.874167 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 04:00:51.886339 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 04:00:51.886521 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 04:00:51.902437 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 04:00:51.902437 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 04:00:51.905646 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 04:00:51.906118 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 04:00:51.908203 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 04:00:51.954341 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 04:00:51.999686 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 04:00:51.999903 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 04:00:52.001892 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 04:00:52.003258 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 04:00:52.004947 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 04:00:52.021181 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Sep 9 04:00:52.039830 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 04:00:52.052232 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 04:00:52.065755 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 04:00:52.067895 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 04:00:52.069796 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 04:00:52.070689 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 04:00:52.070880 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 04:00:52.072864 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 04:00:52.073837 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 04:00:52.075402 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 04:00:52.076975 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 04:00:52.078418 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 04:00:52.080065 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 04:00:52.081649 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 04:00:52.083360 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 04:00:52.084925 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 04:00:52.086567 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 04:00:52.088061 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 04:00:52.088263 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 04:00:52.090046 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 04:00:52.091081 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 04:00:52.092570 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 04:00:52.094121 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 04:00:52.095219 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 04:00:52.095515 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 04:00:52.097362 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 04:00:52.097566 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 04:00:52.098681 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 04:00:52.098915 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 04:00:52.106242 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 04:00:52.107652 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 04:00:52.107875 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 04:00:52.112581 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 04:00:52.115109 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 04:00:52.116232 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 04:00:52.119294 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 04:00:52.120454 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 04:00:52.137299 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 04:00:52.137473 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 04:00:52.151011 ignition[1009]: INFO : Ignition 2.19.0
Sep 9 04:00:52.153874 ignition[1009]: INFO : Stage: umount
Sep 9 04:00:52.153874 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:00:52.153874 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 04:00:52.153874 ignition[1009]: INFO : umount: umount passed
Sep 9 04:00:52.153874 ignition[1009]: INFO : Ignition finished successfully
Sep 9 04:00:52.158625 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 04:00:52.159690 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 04:00:52.162330 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 04:00:52.162405 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 04:00:52.164869 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 04:00:52.164946 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 04:00:52.166250 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 04:00:52.166322 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 04:00:52.168207 systemd[1]: Stopped target network.target - Network.
Sep 9 04:00:52.169678 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 04:00:52.169808 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 04:00:52.173122 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 04:00:52.174650 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 04:00:52.175040 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 04:00:52.176178 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 04:00:52.177675 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 04:00:52.179417 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 04:00:52.179515 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 04:00:52.180797 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 04:00:52.180877 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 04:00:52.182153 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 04:00:52.182254 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 04:00:52.183534 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 04:00:52.183607 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 04:00:52.185272 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 04:00:52.187847 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 04:00:52.192318 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 04:00:52.193301 systemd-networkd[767]: eth0: DHCPv6 lease lost
Sep 9 04:00:52.195262 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 04:00:52.195403 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 04:00:52.199200 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 04:00:52.199373 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 04:00:52.203780 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 04:00:52.203906 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 04:00:52.204941 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 04:00:52.205076 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 04:00:52.215102 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 04:00:52.215831 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 04:00:52.215934 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 04:00:52.219159 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 04:00:52.221167 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 04:00:52.221364 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 04:00:52.234293 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 04:00:52.234464 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 04:00:52.237153 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 04:00:52.237228 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 04:00:52.238198 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 04:00:52.238286 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 04:00:52.240473 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 04:00:52.240765 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 04:00:52.242967 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 04:00:52.243104 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 04:00:52.245240 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 04:00:52.245360 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 04:00:52.246711 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 04:00:52.246772 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 04:00:52.252527 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 04:00:52.252619 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 04:00:52.254823 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 04:00:52.254895 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 04:00:52.256232 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 04:00:52.256319 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 04:00:52.263159 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 04:00:52.263970 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 04:00:52.264046 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 04:00:52.266184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 04:00:52.266284 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:00:52.277700 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 04:00:52.277882 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 04:00:52.279701 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 04:00:52.289185 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 04:00:52.300112 systemd[1]: Switching root.
Sep 9 04:00:52.343397 systemd-journald[202]: Journal stopped
Sep 9 04:00:53.977612 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Sep 9 04:00:53.982611 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 04:00:53.982710 kernel: SELinux: policy capability open_perms=1
Sep 9 04:00:53.982775 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 04:00:53.982844 kernel: SELinux: policy capability always_check_network=0
Sep 9 04:00:53.982893 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 04:00:53.982916 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 04:00:53.982984 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 04:00:53.983033 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 04:00:53.983057 kernel: audit: type=1403 audit(1757390452.611:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 04:00:53.983167 systemd[1]: Successfully loaded SELinux policy in 63.079ms.
Sep 9 04:00:53.983324 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.847ms.
Sep 9 04:00:53.983396 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 04:00:53.983445 systemd[1]: Detected virtualization kvm.
Sep 9 04:00:53.983472 systemd[1]: Detected architecture x86-64.
Sep 9 04:00:53.983532 systemd[1]: Detected first boot.
Sep 9 04:00:53.983569 systemd[1]: Hostname set to .
Sep 9 04:00:53.983592 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 04:00:53.983636 zram_generator::config[1052]: No configuration found.
Sep 9 04:00:53.983703 systemd[1]: Populated /etc with preset unit settings.
Sep 9 04:00:53.983777 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 04:00:53.983824 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 04:00:53.983848 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 04:00:53.983898 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 04:00:53.986565 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 04:00:53.986599 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 04:00:53.986621 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 04:00:53.986642 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 04:00:53.986663 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 04:00:53.986730 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 04:00:53.986754 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 04:00:53.986775 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 04:00:53.986797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 04:00:53.986818 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 04:00:53.986839 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 04:00:53.986871 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 04:00:53.986903 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 04:00:53.986927 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 04:00:53.987023 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 04:00:53.987048 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 04:00:53.987070 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 04:00:53.987091 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 04:00:53.987122 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 04:00:53.987145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 04:00:53.987213 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 04:00:53.987247 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 04:00:53.987293 systemd[1]: Reached target swap.target - Swaps.
Sep 9 04:00:53.987316 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 04:00:53.987337 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 04:00:53.987358 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 04:00:53.987411 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 04:00:53.987443 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 04:00:53.987505 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 04:00:53.987580 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 04:00:53.987635 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 04:00:53.987661 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 04:00:53.987682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 04:00:53.987714 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 04:00:53.987771 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 04:00:53.987823 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 04:00:53.987859 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 04:00:53.987881 systemd[1]: Reached target machines.target - Containers.
Sep 9 04:00:53.987932 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 04:00:53.990260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:00:53.990291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 04:00:53.990314 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 04:00:53.990335 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 04:00:53.990403 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 04:00:53.990428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 04:00:53.990450 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 04:00:53.990483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 04:00:53.990507 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 04:00:53.990529 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 04:00:53.990550 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 04:00:53.990571 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 04:00:53.990623 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 04:00:53.990679 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 04:00:53.990704 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 04:00:53.990725 kernel: loop: module loaded
Sep 9 04:00:53.990747 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 04:00:53.990768 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 04:00:53.990789 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 04:00:53.990810 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 04:00:53.990831 systemd[1]: Stopped verity-setup.service.
Sep 9 04:00:53.990864 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 04:00:53.990917 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 04:00:53.990942 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 04:00:53.991014 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 04:00:53.991040 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 04:00:53.991095 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 04:00:53.991129 kernel: ACPI: bus type drm_connector registered
Sep 9 04:00:53.991151 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 04:00:53.991173 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 04:00:53.991195 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 04:00:53.991243 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 04:00:53.991268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 04:00:53.991290 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 04:00:53.991346 kernel: fuse: init (API version 7.39)
Sep 9 04:00:53.991428 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 04:00:53.991506 systemd-journald[1149]: Collecting audit messages is disabled.
Sep 9 04:00:53.991667 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 04:00:53.991694 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 04:00:53.991763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 04:00:53.991788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 04:00:53.991810 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 04:00:53.991831 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 04:00:53.991853 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 04:00:53.991905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 04:00:53.991930 systemd-journald[1149]: Journal started
Sep 9 04:00:53.992004 systemd-journald[1149]: Runtime Journal (/run/log/journal/3265d375dc0a45839c6108def0d37f54) is 4.7M, max 38.0M, 33.2M free.
Sep 9 04:00:53.459810 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 04:00:53.483945 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 04:00:53.484947 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 04:00:53.996042 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 04:00:53.997729 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 04:00:53.999622 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 04:00:54.000993 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 04:00:54.017364 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 04:00:54.027098 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 04:00:54.043136 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 04:00:54.044150 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 04:00:54.044201 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 04:00:54.049411 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 9 04:00:54.057878 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 04:00:54.065151 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 04:00:54.066300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:00:54.073227 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 04:00:54.076209 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 04:00:54.077073 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 04:00:54.079174 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 04:00:54.081096 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 04:00:54.084502 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 04:00:54.088655 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 04:00:54.100196 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 04:00:54.108766 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 04:00:54.109799 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 04:00:54.112034 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 04:00:54.264026 kernel: loop0: detected capacity change from 0 to 221472
Sep 9 04:00:54.274125 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 04:00:54.276563 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 04:00:54.286225 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 9 04:00:54.298023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 04:00:54.308362 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 9 04:00:54.324680 systemd-journald[1149]: Time spent on flushing to /var/log/journal/3265d375dc0a45839c6108def0d37f54 is 68.919ms for 1144 entries.
Sep 9 04:00:54.324680 systemd-journald[1149]: System Journal (/var/log/journal/3265d375dc0a45839c6108def0d37f54) is 8.0M, max 584.8M, 576.8M free.
Sep 9 04:00:54.425476 systemd-journald[1149]: Received client request to flush runtime journal.
Sep 9 04:00:54.425544 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 04:00:54.425575 kernel: loop1: detected capacity change from 0 to 8
Sep 9 04:00:54.382304 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 9 04:00:54.392920 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 04:00:54.406823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 04:00:54.427140 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 04:00:54.433308 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 04:00:54.475097 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 04:00:54.478999 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 9 04:00:54.483542 kernel: loop2: detected capacity change from 0 to 140768
Sep 9 04:00:54.487631 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 9 04:00:54.487663 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 9 04:00:54.502398 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 04:00:54.673076 kernel: loop3: detected capacity change from 0 to 142488
Sep 9 04:00:54.742538 kernel: loop4: detected capacity change from 0 to 221472
Sep 9 04:00:54.784906 kernel: loop5: detected capacity change from 0 to 8
Sep 9 04:00:54.794159 kernel: loop6: detected capacity change from 0 to 140768
Sep 9 04:00:54.831397 kernel: loop7: detected capacity change from 0 to 142488
Sep 9 04:00:54.960731 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Sep 9 04:00:54.961885 (sd-merge)[1213]: Merged extensions into '/usr'.
Sep 9 04:00:54.990132 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 04:00:54.990160 systemd[1]: Reloading...
Sep 9 04:00:55.179866 zram_generator::config[1236]: No configuration found.
Sep 9 04:00:55.233022 ldconfig[1181]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 04:00:55.443822 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 04:00:55.514186 systemd[1]: Reloading finished in 523 ms.
Sep 9 04:00:55.552292 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 04:00:55.553758 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 04:00:55.570471 systemd[1]: Starting ensure-sysext.service...
Sep 9 04:00:55.580858 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 04:00:55.601243 systemd[1]: Reloading requested from client PID 1295 ('systemctl') (unit ensure-sysext.service)...
Sep 9 04:00:55.601273 systemd[1]: Reloading...
Sep 9 04:00:55.631764 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 04:00:55.632741 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 04:00:55.635235 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 04:00:55.635824 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Sep 9 04:00:55.636103 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Sep 9 04:00:55.644567 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 04:00:55.644871 systemd-tmpfiles[1296]: Skipping /boot
Sep 9 04:00:55.663593 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 04:00:55.663808 systemd-tmpfiles[1296]: Skipping /boot
Sep 9 04:00:55.712002 zram_generator::config[1323]: No configuration found.
Sep 9 04:00:55.889807 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 04:00:55.958281 systemd[1]: Reloading finished in 356 ms.
Sep 9 04:00:55.980664 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 04:00:55.993709 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 04:00:56.005331 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 9 04:00:56.015184 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 04:00:56.022157 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 04:00:56.028184 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 04:00:56.037151 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 04:00:56.040174 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 04:00:56.050594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 04:00:56.050894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:00:56.060225 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 04:00:56.071425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 04:00:56.081267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 04:00:56.082265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:00:56.082442 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 04:00:56.094041 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 04:00:56.098593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 04:00:56.098876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:00:56.099131 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:00:56.099266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 04:00:56.106411 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 04:00:56.109683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 04:00:56.112120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:00:56.135093 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 04:00:56.136073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:00:56.147308 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 04:00:56.149039 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 04:00:56.152033 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 04:00:56.153512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 04:00:56.153754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 04:00:56.164226 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 04:00:56.168063 systemd[1]: Finished ensure-sysext.service.
Sep 9 04:00:56.173641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 04:00:56.174438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 04:00:56.176728 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 04:00:56.186075 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 04:00:56.186381 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 04:00:56.195578 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 04:00:56.195853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 04:00:56.198386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 04:00:56.198496 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 04:00:56.211271 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 04:00:56.221052 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 04:00:56.241004 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 04:00:56.247286 augenrules[1424]: No rules
Sep 9 04:00:56.250542 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 9 04:00:56.261635 systemd-udevd[1387]: Using default interface naming scheme 'v255'.
Sep 9 04:00:56.319213 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 04:00:56.323967 systemd-resolved[1386]: Positive Trust Anchors:
Sep 9 04:00:56.326019 systemd-resolved[1386]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 04:00:56.327078 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 04:00:56.330209 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 04:00:56.346244 systemd-resolved[1386]: Using system hostname 'srv-gbnqu.gb1.brightbox.com'.
Sep 9 04:00:56.354296 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 04:00:56.355377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 04:00:56.382575 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 04:00:56.383612 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 04:00:56.446198 systemd-networkd[1432]: lo: Link UP
Sep 9 04:00:56.446212 systemd-networkd[1432]: lo: Gained carrier
Sep 9 04:00:56.447471 systemd-networkd[1432]: Enumeration completed
Sep 9 04:00:56.447738 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 04:00:56.449086 systemd[1]: Reached target network.target - Network.
Sep 9 04:00:56.457559 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 04:00:56.562281 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 04:00:56.619409 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:00:56.619999 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 04:00:56.627171 systemd-networkd[1432]: eth0: Link UP
Sep 9 04:00:56.628225 systemd-networkd[1432]: eth0: Gained carrier
Sep 9 04:00:56.628451 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:00:56.687501 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1448)
Sep 9 04:00:56.691094 systemd-networkd[1432]: eth0: DHCPv4 address 10.230.58.214/30, gateway 10.230.58.213 acquired from 10.230.58.213
Sep 9 04:00:56.693124 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection.
Sep 9 04:00:56.735990 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 04:00:56.755214 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 9 04:00:56.775031 kernel: ACPI: button: Power Button [PWRF]
Sep 9 04:00:56.829988 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 9 04:00:56.834144 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 9 04:00:56.834488 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 04:00:56.844238 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 9 04:00:56.896263 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 04:00:56.905252 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 04:00:56.931114 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 04:00:56.947273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:00:57.146589 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:00:57.168634 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 9 04:00:57.177396 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 9 04:00:57.205020 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 04:00:57.244370 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 9 04:00:57.245575 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 04:00:57.246498 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 04:00:57.247483 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 04:00:57.248484 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 04:00:57.249701 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 04:00:57.250701 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 04:00:57.251554 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 04:00:57.252347 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 04:00:57.252410 systemd[1]: Reached target paths.target - Path Units.
Sep 9 04:00:57.253068 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 04:00:57.255335 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 04:00:57.258161 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 04:00:57.263538 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 04:00:57.266609 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 9 04:00:57.268136 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 04:00:57.269097 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 04:00:57.269816 systemd[1]: Reached target basic.target - Basic System.
Sep 9 04:00:57.270581 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 04:00:57.270636 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 04:00:57.285900 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 04:00:57.290355 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 9 04:00:57.293678 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 04:00:57.300194 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 04:00:57.305120 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 04:00:57.308784 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 04:00:57.310050 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 04:00:57.320172 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 04:00:57.468572 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 04:00:57.477201 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 04:00:57.485744 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 04:00:57.500197 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 04:00:57.508244 jq[1477]: false
Sep 9 04:00:57.512327 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 04:00:57.515523 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 04:00:57.523187 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 04:00:57.536143 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 04:00:57.539778 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 9 04:00:57.544723 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 04:00:57.545071 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 04:00:57.549482 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 04:00:57.549762 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 04:00:57.560242 dbus-daemon[1476]: [system] SELinux support is enabled
Sep 9 04:00:57.560534 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 04:00:57.571979 extend-filesystems[1478]: Found loop4
Sep 9 04:00:57.565911 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 04:00:57.579580 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1432 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found loop5
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found loop6
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found loop7
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found vda
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found vda1
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found vda2
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found vda3
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found usr
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found vda4
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found vda6
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found vda7
Sep 9 04:00:57.587808 extend-filesystems[1478]: Found vda9
Sep 9 04:00:57.587808 extend-filesystems[1478]: Checking size of /dev/vda9
Sep 9 04:00:57.565973 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 04:00:57.566899 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 04:00:57.613247 jq[1492]: true
Sep 9 04:00:57.566941 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 04:00:57.595219 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 9 04:00:57.599858 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 04:00:57.600428 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 04:00:57.647631 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 04:00:57.654363 tar[1496]: linux-amd64/helm
Sep 9 04:00:57.654745 extend-filesystems[1478]: Resized partition /dev/vda9
Sep 9 04:00:57.665737 extend-filesystems[1516]: resize2fs 1.47.1 (20-May-2024)
Sep 9 04:00:57.670137 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Sep 9 04:00:57.684837 jq[1510]: true
Sep 9 04:00:57.719068 update_engine[1489]: I20250909 04:00:57.716090 1489 main.cc:92] Flatcar Update Engine starting
Sep 9 04:00:57.741997 update_engine[1489]: I20250909 04:00:57.731237 1489 update_check_scheduler.cc:74] Next update check in 2m13s
Sep 9 04:00:57.731625 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 04:00:57.746274 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 04:00:57.774425 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 9 04:00:57.774480 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 04:00:57.774837 systemd-logind[1485]: New seat seat0.
Sep 9 04:00:57.782069 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 04:00:57.933608 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1448)
Sep 9 04:00:57.936123 systemd-networkd[1432]: eth0: Gained IPv6LL
Sep 9 04:00:57.937439 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection.
Sep 9 04:00:57.939594 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 04:00:57.945488 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 04:00:57.955327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:00:57.963318 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 04:00:58.254141 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 9 04:00:58.254793 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 9 04:00:58.258432 bash[1538]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 04:00:58.257908 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1505 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 9 04:00:58.301374 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 9 04:00:58.306488 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 04:00:58.323430 systemd[1]: Starting sshkeys.service...
Sep 9 04:00:58.329561 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 04:00:58.381699 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 9 04:00:58.393674 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 9 04:00:58.433618 polkitd[1547]: Started polkitd version 121
Sep 9 04:00:58.496060 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection.
Sep 9 04:00:58.507873 systemd-networkd[1432]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8eb5:24:19ff:fee6:3ad6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8eb5:24:19ff:fee6:3ad6/64 assigned by NDisc.
Sep 9 04:00:58.507881 systemd-networkd[1432]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Sep 9 04:00:58.522313 polkitd[1547]: Loading rules from directory /etc/polkit-1/rules.d
Sep 9 04:00:58.522431 polkitd[1547]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 9 04:00:58.532144 polkitd[1547]: Finished loading, compiling and executing 2 rules
Sep 9 04:00:58.542291 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 9 04:00:58.542596 systemd[1]: Started polkit.service - Authorization Manager.
Sep 9 04:00:58.551758 polkitd[1547]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 9 04:00:58.565426 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 9 04:00:58.638478 extend-filesystems[1516]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 04:00:58.638478 extend-filesystems[1516]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 9 04:00:58.638478 extend-filesystems[1516]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 9 04:00:58.662105 extend-filesystems[1478]: Resized filesystem in /dev/vda9
Sep 9 04:00:58.639341 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 04:00:58.640263 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 04:00:58.657600 systemd-hostnamed[1505]: Hostname set to (static)
Sep 9 04:00:58.691562 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 04:00:58.774778 containerd[1511]: time="2025-09-09T04:00:58.773312780Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 9 04:00:58.997175 containerd[1511]: time="2025-09-09T04:00:58.996309552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 9 04:00:59.003524 containerd[1511]: time="2025-09-09T04:00:59.003464617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 9 04:00:59.003680 containerd[1511]: time="2025-09-09T04:00:59.003653343Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 9 04:00:59.004946 containerd[1511]: time="2025-09-09T04:00:59.003840502Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 9 04:00:59.004946 containerd[1511]: time="2025-09-09T04:00:59.004274373Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 9 04:00:59.004946 containerd[1511]: time="2025-09-09T04:00:59.004313903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 9 04:00:59.004946 containerd[1511]: time="2025-09-09T04:00:59.004445614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 04:00:59.004946 containerd[1511]: time="2025-09-09T04:00:59.004472703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 9 04:00:59.005797 containerd[1511]: time="2025-09-09T04:00:59.005762376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 04:00:59.006428 containerd[1511]: time="2025-09-09T04:00:59.006362920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 9 04:00:59.006554 containerd[1511]: time="2025-09-09T04:00:59.006408622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 04:00:59.006672 containerd[1511]: time="2025-09-09T04:00:59.006646995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 9 04:00:59.008331 containerd[1511]: time="2025-09-09T04:00:59.007811404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 9 04:00:59.009287 containerd[1511]: time="2025-09-09T04:00:59.009245989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 9 04:00:59.010018 containerd[1511]: time="2025-09-09T04:00:59.009979520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 04:00:59.010203 containerd[1511]: time="2025-09-09T04:00:59.010136981Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 9 04:00:59.012211 containerd[1511]: time="2025-09-09T04:00:59.011611214Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 9 04:00:59.012211 containerd[1511]: time="2025-09-09T04:00:59.011774938Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 04:00:59.021541 containerd[1511]: time="2025-09-09T04:00:59.021341100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 9 04:00:59.024191 containerd[1511]: time="2025-09-09T04:00:59.021695769Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..."
type=io.containerd.differ.v1 Sep 9 04:00:59.024191 containerd[1511]: time="2025-09-09T04:00:59.021755063Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 04:00:59.024191 containerd[1511]: time="2025-09-09T04:00:59.021786712Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 04:00:59.024191 containerd[1511]: time="2025-09-09T04:00:59.021817953Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 04:00:59.024191 containerd[1511]: time="2025-09-09T04:00:59.022110186Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 04:00:59.024191 containerd[1511]: time="2025-09-09T04:00:59.022559611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 04:00:59.024191 containerd[1511]: time="2025-09-09T04:00:59.022825723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 04:00:59.024191 containerd[1511]: time="2025-09-09T04:00:59.022853764Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 04:00:59.024191 containerd[1511]: time="2025-09-09T04:00:59.022904924Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 04:00:59.025997 containerd[1511]: time="2025-09-09T04:00:59.022951875Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.026641847Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027056879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027100081Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027138212Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027172492Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027200362Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027245667Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027300888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027330394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027353669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027379756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027403420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027424204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.027931 containerd[1511]: time="2025-09-09T04:00:59.027449517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.028485 containerd[1511]: time="2025-09-09T04:00:59.027475837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.028485 containerd[1511]: time="2025-09-09T04:00:59.027499876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.028485 containerd[1511]: time="2025-09-09T04:00:59.027563070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.028485 containerd[1511]: time="2025-09-09T04:00:59.027612212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.028485 containerd[1511]: time="2025-09-09T04:00:59.027642517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.028485 containerd[1511]: time="2025-09-09T04:00:59.027668545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.028485 containerd[1511]: time="2025-09-09T04:00:59.027726406Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 9 04:00:59.028485 containerd[1511]: time="2025-09-09T04:00:59.027777727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Sep 9 04:00:59.028485 containerd[1511]: time="2025-09-09T04:00:59.027802345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.028485 containerd[1511]: time="2025-09-09T04:00:59.027836400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 04:00:59.032180 containerd[1511]: time="2025-09-09T04:00:59.030208450Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 04:00:59.032180 containerd[1511]: time="2025-09-09T04:00:59.030404694Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 04:00:59.032180 containerd[1511]: time="2025-09-09T04:00:59.030429622Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 04:00:59.032180 containerd[1511]: time="2025-09-09T04:00:59.030450210Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 04:00:59.032180 containerd[1511]: time="2025-09-09T04:00:59.030470543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 04:00:59.032180 containerd[1511]: time="2025-09-09T04:00:59.030505330Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 04:00:59.032180 containerd[1511]: time="2025-09-09T04:00:59.030530735Z" level=info msg="NRI interface is disabled by configuration." Sep 9 04:00:59.032180 containerd[1511]: time="2025-09-09T04:00:59.030549610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 9 04:00:59.035555 containerd[1511]: time="2025-09-09T04:00:59.032903631Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 04:00:59.035555 containerd[1511]: time="2025-09-09T04:00:59.033034708Z" level=info msg="Connect containerd service" Sep 9 04:00:59.035555 containerd[1511]: time="2025-09-09T04:00:59.033109472Z" level=info msg="using legacy CRI server" Sep 9 04:00:59.035555 containerd[1511]: time="2025-09-09T04:00:59.033127992Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 04:00:59.035555 containerd[1511]: time="2025-09-09T04:00:59.033343774Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 04:00:59.042603 containerd[1511]: time="2025-09-09T04:00:59.039633032Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 04:00:59.042603 containerd[1511]: time="2025-09-09T04:00:59.041333997Z" level=info msg="Start subscribing containerd event" Sep 9 04:00:59.042603 containerd[1511]: time="2025-09-09T04:00:59.041442391Z" level=info msg="Start recovering state" Sep 9 04:00:59.042603 containerd[1511]: time="2025-09-09T04:00:59.041584253Z" level=info msg="Start event monitor" Sep 9 04:00:59.042603 containerd[1511]: time="2025-09-09T04:00:59.041646513Z" level=info msg="Start snapshots 
syncer" Sep 9 04:00:59.042603 containerd[1511]: time="2025-09-09T04:00:59.041673317Z" level=info msg="Start cni network conf syncer for default" Sep 9 04:00:59.042603 containerd[1511]: time="2025-09-09T04:00:59.041693374Z" level=info msg="Start streaming server" Sep 9 04:00:59.053151 containerd[1511]: time="2025-09-09T04:00:59.048785885Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 04:00:59.053151 containerd[1511]: time="2025-09-09T04:00:59.049151003Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 04:00:59.055279 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 04:00:59.061841 containerd[1511]: time="2025-09-09T04:00:59.061777025Z" level=info msg="containerd successfully booted in 0.296363s" Sep 9 04:00:59.247322 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 04:00:59.294021 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 04:00:59.309199 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 04:00:59.329730 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 04:00:59.330086 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 04:00:59.347238 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 04:00:59.523898 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 04:00:59.536665 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 04:00:59.546975 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 04:00:59.548217 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 04:00:59.623725 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 04:00:59.632625 systemd[1]: Started sshd@0-10.230.58.214:22-147.75.109.163:47924.service - OpenSSH per-connection server daemon (147.75.109.163:47924). 
Sep 9 04:00:59.902934 tar[1496]: linux-amd64/LICENSE
Sep 9 04:00:59.902934 tar[1496]: linux-amd64/README.md
Sep 9 04:00:59.920257 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 04:01:00.233142 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection.
Sep 9 04:01:00.683137 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 47924 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:01:00.686239 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:01:00.708775 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 04:01:00.722631 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 04:01:00.730933 systemd-logind[1485]: New session 1 of user core.
Sep 9 04:01:00.785441 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 04:01:00.797893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:01:00.809542 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:01:00.810081 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 04:01:00.820402 (systemd)[1609]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 04:01:00.983265 systemd[1609]: Queued start job for default target default.target.
Sep 9 04:01:00.990507 systemd[1609]: Created slice app.slice - User Application Slice.
Sep 9 04:01:00.990738 systemd[1609]: Reached target paths.target - Paths.
Sep 9 04:01:00.990764 systemd[1609]: Reached target timers.target - Timers.
Sep 9 04:01:00.996447 systemd[1609]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 04:01:01.025437 systemd[1609]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 04:01:01.025649 systemd[1609]: Reached target sockets.target - Sockets.
Sep 9 04:01:01.025675 systemd[1609]: Reached target basic.target - Basic System.
Sep 9 04:01:01.025756 systemd[1609]: Reached target default.target - Main User Target.
Sep 9 04:01:01.025844 systemd[1609]: Startup finished in 194ms.
Sep 9 04:01:01.025864 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 04:01:01.033279 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 04:01:01.677748 kubelet[1607]: E0909 04:01:01.677658 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:01:01.681430 systemd[1]: Started sshd@1-10.230.58.214:22-147.75.109.163:47932.service - OpenSSH per-connection server daemon (147.75.109.163:47932).
Sep 9 04:01:01.689409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:01:01.690064 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:01:01.691046 systemd[1]: kubelet.service: Consumed 1.938s CPU time.
Sep 9 04:01:02.576572 sshd[1625]: Accepted publickey for core from 147.75.109.163 port 47932 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:01:02.579208 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:01:02.589037 systemd-logind[1485]: New session 2 of user core.
Sep 9 04:01:02.595339 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 04:01:03.202780 sshd[1625]: pam_unix(sshd:session): session closed for user core
Sep 9 04:01:03.207684 systemd[1]: sshd@1-10.230.58.214:22-147.75.109.163:47932.service: Deactivated successfully.
Sep 9 04:01:03.210258 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 04:01:03.212808 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit.
Sep 9 04:01:03.214664 systemd-logind[1485]: Removed session 2.
Sep 9 04:01:03.364434 systemd[1]: Started sshd@2-10.230.58.214:22-147.75.109.163:47948.service - OpenSSH per-connection server daemon (147.75.109.163:47948).
Sep 9 04:01:04.252757 sshd[1634]: Accepted publickey for core from 147.75.109.163 port 47948 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:01:04.255889 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:01:04.263594 systemd-logind[1485]: New session 3 of user core.
Sep 9 04:01:04.274319 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 04:01:04.593211 login[1592]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 9 04:01:04.603034 systemd-logind[1485]: New session 4 of user core.
Sep 9 04:01:04.612700 login[1591]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 9 04:01:04.613497 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 04:01:04.624021 systemd-logind[1485]: New session 5 of user core.
Sep 9 04:01:04.633446 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 04:01:04.743330 coreos-metadata[1475]: Sep 09 04:01:04.743 WARN failed to locate config-drive, using the metadata service API instead
Sep 9 04:01:04.779191 coreos-metadata[1475]: Sep 09 04:01:04.779 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Sep 9 04:01:04.787252 coreos-metadata[1475]: Sep 09 04:01:04.787 INFO Fetch failed with 404: resource not found
Sep 9 04:01:04.787356 coreos-metadata[1475]: Sep 09 04:01:04.787 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 9 04:01:04.788151 coreos-metadata[1475]: Sep 09 04:01:04.788 INFO Fetch successful
Sep 9 04:01:04.788358 coreos-metadata[1475]: Sep 09 04:01:04.788 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Sep 9 04:01:04.804737 coreos-metadata[1475]: Sep 09 04:01:04.804 INFO Fetch successful
Sep 9 04:01:04.804909 coreos-metadata[1475]: Sep 09 04:01:04.804 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Sep 9 04:01:04.872538 sshd[1634]: pam_unix(sshd:session): session closed for user core
Sep 9 04:01:04.877304 systemd[1]: sshd@2-10.230.58.214:22-147.75.109.163:47948.service: Deactivated successfully.
Sep 9 04:01:04.880028 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 04:01:04.882594 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit.
Sep 9 04:01:04.884997 systemd-logind[1485]: Removed session 3.
Sep 9 04:01:05.059436 coreos-metadata[1475]: Sep 09 04:01:05.059 INFO Fetch successful
Sep 9 04:01:05.059728 coreos-metadata[1475]: Sep 09 04:01:05.059 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Sep 9 04:01:05.086202 coreos-metadata[1475]: Sep 09 04:01:05.086 INFO Fetch successful
Sep 9 04:01:05.086397 coreos-metadata[1475]: Sep 09 04:01:05.086 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Sep 9 04:01:05.113549 coreos-metadata[1475]: Sep 09 04:01:05.113 INFO Fetch successful
Sep 9 04:01:05.142091 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 9 04:01:05.143696 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 04:01:05.791608 coreos-metadata[1556]: Sep 09 04:01:05.791 WARN failed to locate config-drive, using the metadata service API instead
Sep 9 04:01:05.816321 coreos-metadata[1556]: Sep 09 04:01:05.816 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Sep 9 04:01:05.867095 coreos-metadata[1556]: Sep 09 04:01:05.867 INFO Fetch successful
Sep 9 04:01:05.867412 coreos-metadata[1556]: Sep 09 04:01:05.867 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 9 04:01:05.895862 coreos-metadata[1556]: Sep 09 04:01:05.895 INFO Fetch successful
Sep 9 04:01:05.898916 unknown[1556]: wrote ssh authorized keys file for user: core
Sep 9 04:01:05.928067 update-ssh-keys[1673]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 04:01:05.928996 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 9 04:01:05.933233 systemd[1]: Finished sshkeys.service.
Sep 9 04:01:05.934590 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 04:01:05.935047 systemd[1]: Startup finished in 1.699s (kernel) + 15.828s (initrd) + 13.383s (userspace) = 30.911s.
Sep 9 04:01:11.726285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 04:01:11.738290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:01:12.036827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:01:12.052548 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:01:12.121647 kubelet[1685]: E0909 04:01:12.121555 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:01:12.125381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:01:12.125620 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:01:15.032607 systemd[1]: Started sshd@3-10.230.58.214:22-147.75.109.163:32938.service - OpenSSH per-connection server daemon (147.75.109.163:32938).
Sep 9 04:01:15.912211 sshd[1693]: Accepted publickey for core from 147.75.109.163 port 32938 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:01:15.914512 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:01:15.922668 systemd-logind[1485]: New session 6 of user core.
Sep 9 04:01:15.930188 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 04:01:16.528815 sshd[1693]: pam_unix(sshd:session): session closed for user core
Sep 9 04:01:16.532807 systemd[1]: sshd@3-10.230.58.214:22-147.75.109.163:32938.service: Deactivated successfully.
Sep 9 04:01:16.535023 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 04:01:16.537126 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit.
Sep 9 04:01:16.539112 systemd-logind[1485]: Removed session 6.
Sep 9 04:01:16.708359 systemd[1]: Started sshd@4-10.230.58.214:22-147.75.109.163:32952.service - OpenSSH per-connection server daemon (147.75.109.163:32952).
Sep 9 04:01:17.595139 sshd[1700]: Accepted publickey for core from 147.75.109.163 port 32952 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:01:17.597287 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:01:17.603374 systemd-logind[1485]: New session 7 of user core.
Sep 9 04:01:17.619195 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 04:01:18.221059 sshd[1700]: pam_unix(sshd:session): session closed for user core
Sep 9 04:01:18.227771 systemd[1]: sshd@4-10.230.58.214:22-147.75.109.163:32952.service: Deactivated successfully.
Sep 9 04:01:18.230603 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 04:01:18.231660 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit.
Sep 9 04:01:18.233364 systemd-logind[1485]: Removed session 7.
Sep 9 04:01:18.380287 systemd[1]: Started sshd@5-10.230.58.214:22-147.75.109.163:32960.service - OpenSSH per-connection server daemon (147.75.109.163:32960).
Sep 9 04:01:19.301836 sshd[1707]: Accepted publickey for core from 147.75.109.163 port 32960 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:01:19.304064 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:01:19.313724 systemd-logind[1485]: New session 8 of user core.
Sep 9 04:01:19.316215 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 04:01:19.949474 sshd[1707]: pam_unix(sshd:session): session closed for user core
Sep 9 04:01:19.953252 systemd[1]: sshd@5-10.230.58.214:22-147.75.109.163:32960.service: Deactivated successfully.
Sep 9 04:01:19.955493 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 04:01:19.957756 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit.
Sep 9 04:01:19.959257 systemd-logind[1485]: Removed session 8.
Sep 9 04:01:20.128336 systemd[1]: Started sshd@6-10.230.58.214:22-147.75.109.163:55586.service - OpenSSH per-connection server daemon (147.75.109.163:55586).
Sep 9 04:01:21.155887 sshd[1714]: Accepted publickey for core from 147.75.109.163 port 55586 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:01:21.158038 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:01:21.165434 systemd-logind[1485]: New session 9 of user core.
Sep 9 04:01:21.180348 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 04:01:21.711826 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 04:01:21.712345 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:01:21.730342 sudo[1717]: pam_unix(sudo:session): session closed for user root
Sep 9 04:01:21.897457 sshd[1714]: pam_unix(sshd:session): session closed for user core
Sep 9 04:01:21.903124 systemd[1]: sshd@6-10.230.58.214:22-147.75.109.163:55586.service: Deactivated successfully.
Sep 9 04:01:21.905304 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 04:01:21.906242 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit.
Sep 9 04:01:21.908551 systemd-logind[1485]: Removed session 9.
Sep 9 04:01:22.046471 systemd[1]: Started sshd@7-10.230.58.214:22-147.75.109.163:55602.service - OpenSSH per-connection server daemon (147.75.109.163:55602).
Sep 9 04:01:22.226144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 04:01:22.237354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:01:22.533149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:01:22.545440 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:01:22.623627 kubelet[1731]: E0909 04:01:22.623531 1731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:01:22.626781 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:01:22.627081 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:01:22.933706 sshd[1722]: Accepted publickey for core from 147.75.109.163 port 55602 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:01:22.935911 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:01:22.943284 systemd-logind[1485]: New session 10 of user core.
Sep 9 04:01:22.950230 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 04:01:23.414605 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 04:01:23.415094 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:01:23.420812 sudo[1741]: pam_unix(sudo:session): session closed for user root
Sep 9 04:01:23.429126 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 9 04:01:23.430063 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:01:23.447310 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 9 04:01:23.466839 auditctl[1744]: No rules
Sep 9 04:01:23.466561 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 04:01:23.467480 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 9 04:01:23.482560 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 9 04:01:23.520947 augenrules[1762]: No rules
Sep 9 04:01:23.522590 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 9 04:01:23.524169 sudo[1740]: pam_unix(sudo:session): session closed for user root
Sep 9 04:01:23.668292 sshd[1722]: pam_unix(sshd:session): session closed for user core
Sep 9 04:01:23.673095 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit.
Sep 9 04:01:23.673774 systemd[1]: sshd@7-10.230.58.214:22-147.75.109.163:55602.service: Deactivated successfully.
Sep 9 04:01:23.676563 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 04:01:23.679459 systemd-logind[1485]: Removed session 10.
Sep 9 04:01:23.849661 systemd[1]: Started sshd@8-10.230.58.214:22-147.75.109.163:55606.service - OpenSSH per-connection server daemon (147.75.109.163:55606).
Sep 9 04:01:24.777150 sshd[1770]: Accepted publickey for core from 147.75.109.163 port 55606 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:01:24.779292 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:01:24.786217 systemd-logind[1485]: New session 11 of user core.
Sep 9 04:01:24.793162 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 04:01:25.277583 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 04:01:25.278727 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:01:26.071316 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 04:01:26.084575 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 04:01:26.906893 dockerd[1790]: time="2025-09-09T04:01:26.906744224Z" level=info msg="Starting up"
Sep 9 04:01:27.124080 dockerd[1790]: time="2025-09-09T04:01:27.123727833Z" level=info msg="Loading containers: start."
Sep 9 04:01:27.275019 kernel: Initializing XFRM netlink socket
Sep 9 04:01:27.314496 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection.
Sep 9 04:01:27.385808 systemd-networkd[1432]: docker0: Link UP
Sep 9 04:01:27.407059 dockerd[1790]: time="2025-09-09T04:01:27.407010232Z" level=info msg="Loading containers: done."
Sep 9 04:01:27.430588 dockerd[1790]: time="2025-09-09T04:01:27.429797572Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 04:01:27.430588 dockerd[1790]: time="2025-09-09T04:01:27.429971342Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 9 04:01:27.430588 dockerd[1790]: time="2025-09-09T04:01:27.430165170Z" level=info msg="Daemon has completed initialization"
Sep 9 04:01:27.483426 dockerd[1790]: time="2025-09-09T04:01:27.483178017Z" level=info msg="API listen on /run/docker.sock"
Sep 9 04:01:27.484091 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 04:01:28.951880 systemd-resolved[1386]: Clock change detected. Flushing caches.
Sep 9 04:01:28.952382 systemd-timesyncd[1414]: Contacted time server [2a02:6b67:d551:8f04::]:123 (2.flatcar.pool.ntp.org).
Sep 9 04:01:28.952503 systemd-timesyncd[1414]: Initial clock synchronization to Tue 2025-09-09 04:01:28.951774 UTC.
Sep 9 04:01:29.832597 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 9 04:01:30.177275 containerd[1511]: time="2025-09-09T04:01:30.176196203Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 9 04:01:31.535069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695569405.mount: Deactivated successfully.
Sep 9 04:01:33.843303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 9 04:01:33.855710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:01:34.306613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:01:34.310000 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:01:34.462074 kubelet[1997]: E0909 04:01:34.461636 1997 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:01:34.464691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:01:34.464959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:01:35.140909 containerd[1511]: time="2025-09-09T04:01:35.140244305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:35.140909 containerd[1511]: time="2025-09-09T04:01:35.140842993Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079639"
Sep 9 04:01:35.144599 containerd[1511]: time="2025-09-09T04:01:35.143686002Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:35.149630 containerd[1511]: time="2025-09-09T04:01:35.149588428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:35.152395 containerd[1511]: time="2025-09-09T04:01:35.152337231Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 4.975984822s"
Sep 9 04:01:35.153277 containerd[1511]: time="2025-09-09T04:01:35.153212883Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 9 04:01:35.157098 containerd[1511]: time="2025-09-09T04:01:35.156706536Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 9 04:01:39.028511 containerd[1511]: time="2025-09-09T04:01:39.028116896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:39.044185 containerd[1511]: time="2025-09-09T04:01:39.044029560Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714689"
Sep 9 04:01:39.045786 containerd[1511]: time="2025-09-09T04:01:39.045710161Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:39.053385 containerd[1511]: time="2025-09-09T04:01:39.051517989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:39.053900 containerd[1511]: time="2025-09-09T04:01:39.053853265Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 3.897038455s"
Sep 9 04:01:39.054112 containerd[1511]: time="2025-09-09T04:01:39.054082367Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 9 04:01:39.056331 containerd[1511]: time="2025-09-09T04:01:39.056299144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 9 04:01:41.624897 containerd[1511]: time="2025-09-09T04:01:41.624709174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:41.628044 containerd[1511]: time="2025-09-09T04:01:41.627521107Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782435"
Sep 9 04:01:41.630116 containerd[1511]: time="2025-09-09T04:01:41.629993157Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:41.637787 containerd[1511]: time="2025-09-09T04:01:41.637718673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:41.642941 containerd[1511]: time="2025-09-09T04:01:41.642429040Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 2.586079095s"
Sep 9 04:01:41.642941 containerd[1511]: time="2025-09-09T04:01:41.642490572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 9 04:01:41.644749 containerd[1511]: time="2025-09-09T04:01:41.644694995Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 9 04:01:43.922195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928018885.mount: Deactivated successfully.
Sep 9 04:01:44.155780 update_engine[1489]: I20250909 04:01:44.155483 1489 update_attempter.cc:509] Updating boot flags...
Sep 9 04:01:44.226719 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2023)
Sep 9 04:01:44.340419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2025)
Sep 9 04:01:44.448393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2025)
Sep 9 04:01:44.479209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 9 04:01:44.504977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:01:44.916573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:01:44.932636 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:01:45.071808 kubelet[2042]: E0909 04:01:45.071656 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:01:45.077981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:01:45.078306 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:01:45.566335 containerd[1511]: time="2025-09-09T04:01:45.566132463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:45.569392 containerd[1511]: time="2025-09-09T04:01:45.567728756Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384263"
Sep 9 04:01:45.571063 containerd[1511]: time="2025-09-09T04:01:45.570995244Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:45.576629 containerd[1511]: time="2025-09-09T04:01:45.576492378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:45.578123 containerd[1511]: time="2025-09-09T04:01:45.578071798Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 3.93332311s"
Sep 9 04:01:45.578379 containerd[1511]: time="2025-09-09T04:01:45.578337209Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 9 04:01:45.581778 containerd[1511]: time="2025-09-09T04:01:45.581744698Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 04:01:46.351139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount748406911.mount: Deactivated successfully.
Sep 9 04:01:48.279936 containerd[1511]: time="2025-09-09T04:01:48.279677654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:48.282788 containerd[1511]: time="2025-09-09T04:01:48.282308810Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Sep 9 04:01:48.283819 containerd[1511]: time="2025-09-09T04:01:48.283767742Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:48.290883 containerd[1511]: time="2025-09-09T04:01:48.290827994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:48.293224 containerd[1511]: time="2025-09-09T04:01:48.293176204Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.711077222s"
Sep 9 04:01:48.293346 containerd[1511]: time="2025-09-09T04:01:48.293251210Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 9 04:01:48.298601 containerd[1511]: time="2025-09-09T04:01:48.298559590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 04:01:48.958449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount609283642.mount: Deactivated successfully.
Sep 9 04:01:48.966066 containerd[1511]: time="2025-09-09T04:01:48.965998797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:48.968349 containerd[1511]: time="2025-09-09T04:01:48.968286079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Sep 9 04:01:48.969752 containerd[1511]: time="2025-09-09T04:01:48.969684341Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:48.973724 containerd[1511]: time="2025-09-09T04:01:48.973656583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:48.975675 containerd[1511]: time="2025-09-09T04:01:48.974791283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 676.090564ms"
Sep 9 04:01:48.975675 containerd[1511]: time="2025-09-09T04:01:48.974838643Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 9 04:01:48.976475 containerd[1511]: time="2025-09-09T04:01:48.976405645Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 9 04:01:49.726899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394718634.mount: Deactivated successfully.
Sep 9 04:01:53.271415 containerd[1511]: time="2025-09-09T04:01:53.270255848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:53.274685 containerd[1511]: time="2025-09-09T04:01:53.274605262Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910717"
Sep 9 04:01:53.277158 containerd[1511]: time="2025-09-09T04:01:53.276438073Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:53.281812 containerd[1511]: time="2025-09-09T04:01:53.281744185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:01:53.284094 containerd[1511]: time="2025-09-09T04:01:53.284048984Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.307333281s"
Sep 9 04:01:53.284261 containerd[1511]: time="2025-09-09T04:01:53.284230362Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 9 04:01:55.093690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 9 04:01:55.106463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:01:55.478615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:01:55.482014 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:01:55.565137 kubelet[2189]: E0909 04:01:55.564652 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:01:55.569250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:01:55.570173 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:01:57.825886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:01:57.836787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:01:57.883633 systemd[1]: Reloading requested from client PID 2203 ('systemctl') (unit session-11.scope)...
Sep 9 04:01:57.883687 systemd[1]: Reloading...
Sep 9 04:01:58.051421 zram_generator::config[2240]: No configuration found.
Sep 9 04:01:58.266960 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 04:01:58.381051 systemd[1]: Reloading finished in 496 ms.
Sep 9 04:01:58.466112 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 04:01:58.466542 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 04:01:58.466963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:01:58.471754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:01:58.647564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:01:58.658977 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 04:01:58.733059 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 04:01:58.733059 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 9 04:01:58.733059 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 04:01:58.737640 kubelet[2309]: I0909 04:01:58.737039 2309 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 04:01:59.931031 kubelet[2309]: I0909 04:01:59.930950 2309 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 04:01:59.931031 kubelet[2309]: I0909 04:01:59.931010 2309 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 04:01:59.931908 kubelet[2309]: I0909 04:01:59.931474 2309 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 04:01:59.972262 kubelet[2309]: I0909 04:01:59.972090 2309 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 04:01:59.974309 kubelet[2309]: E0909 04:01:59.973383 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.58.214:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:01:59.988621 kubelet[2309]: E0909 04:01:59.988454 2309 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 04:01:59.988621 kubelet[2309]: I0909 04:01:59.988502 2309 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 04:02:00.000552 kubelet[2309]: I0909 04:02:00.000517 2309 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 04:02:00.003547 kubelet[2309]: I0909 04:02:00.003506 2309 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 04:02:00.003915 kubelet[2309]: I0909 04:02:00.003850 2309 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 04:02:00.004289 kubelet[2309]: I0909 04:02:00.003918 2309 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gbnqu.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 04:02:00.004608 kubelet[2309]: I0909 04:02:00.004337 2309 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 04:02:00.004608 kubelet[2309]: I0909 04:02:00.004356 2309 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 04:02:00.004608 kubelet[2309]: I0909 04:02:00.004588 2309 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 04:02:00.008179 kubelet[2309]: I0909 04:02:00.007802 2309 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 04:02:00.008179 kubelet[2309]: I0909 04:02:00.007850 2309 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 04:02:00.008179 kubelet[2309]: I0909 04:02:00.007944 2309 kubelet.go:314] "Adding apiserver pod source"
Sep 9 04:02:00.008179 kubelet[2309]: I0909 04:02:00.008010 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 04:02:00.013379 kubelet[2309]: W0909 04:02:00.013291 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.58.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gbnqu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.214:6443: connect: connection refused
Sep 9 04:02:00.013856 kubelet[2309]: E0909 04:02:00.013586 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.58.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gbnqu.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:02:00.014893 kubelet[2309]: I0909 04:02:00.014863 2309 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 9 04:02:00.018733 kubelet[2309]: I0909 04:02:00.018706 2309 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 04:02:00.019734 kubelet[2309]: W0909 04:02:00.019709 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 04:02:00.023612 kubelet[2309]: W0909 04:02:00.022637 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.58.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.58.214:6443: connect: connection refused
Sep 9 04:02:00.023612 kubelet[2309]: E0909 04:02:00.022707 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.58.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:02:00.025668 kubelet[2309]: I0909 04:02:00.025646 2309 server.go:1274] "Started kubelet"
Sep 9 04:02:00.026063 kubelet[2309]: I0909 04:02:00.025999 2309 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 04:02:00.028201 kubelet[2309]: I0909 04:02:00.027826 2309 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 04:02:00.031974 kubelet[2309]: I0909 04:02:00.031934 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 04:02:00.033142 kubelet[2309]: I0909 04:02:00.032499 2309 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 04:02:00.037505 kubelet[2309]: I0909 04:02:00.035109 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 04:02:00.046651 kubelet[2309]: I0909 04:02:00.046600 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 04:02:00.050439 kubelet[2309]: E0909 04:02:00.033325 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.58.214:6443/api/v1/namespaces/default/events\": dial tcp 10.230.58.214:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-gbnqu.gb1.brightbox.com.1863815dedc3f2eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gbnqu.gb1.brightbox.com,UID:srv-gbnqu.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gbnqu.gb1.brightbox.com,},FirstTimestamp:2025-09-09 04:02:00.025608939 +0000 UTC m=+1.353327330,LastTimestamp:2025-09-09 04:02:00.025608939 +0000 UTC m=+1.353327330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gbnqu.gb1.brightbox.com,}"
Sep 9 04:02:00.051387 kubelet[2309]: E0909 04:02:00.050835 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:00.051387 kubelet[2309]: I0909 04:02:00.050949 2309 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 9 04:02:00.051387 kubelet[2309]: I0909 04:02:00.051259 2309 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 9 04:02:00.052904 kubelet[2309]: I0909 04:02:00.052436 2309 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 04:02:00.053107 kubelet[2309]: W0909 04:02:00.053057 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.58.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.58.214:6443: connect: connection refused
Sep 9 04:02:00.053282 kubelet[2309]: E0909 04:02:00.053228 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.58.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:02:00.053537 kubelet[2309]: E0909 04:02:00.053500 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gbnqu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.214:6443: connect: connection refused" interval="200ms"
Sep 9 04:02:00.053970 kubelet[2309]: I0909 04:02:00.053944 2309 factory.go:221] Registration of the systemd container factory successfully
Sep 9 04:02:00.054210 kubelet[2309]: I0909 04:02:00.054184 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 04:02:00.059384 kubelet[2309]: I0909 04:02:00.057207 2309 factory.go:221] Registration of the containerd container factory successfully
Sep 9 04:02:00.070217 kubelet[2309]: I0909 04:02:00.070149 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 04:02:00.071894 kubelet[2309]: I0909 04:02:00.071846 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 04:02:00.071979 kubelet[2309]: I0909 04:02:00.071908 2309 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 9 04:02:00.071979 kubelet[2309]: I0909 04:02:00.071954 2309 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 9 04:02:00.072095 kubelet[2309]: E0909 04:02:00.072046 2309 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 04:02:00.088848 kubelet[2309]: W0909 04:02:00.088720 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.58.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.58.214:6443: connect: connection refused
Sep 9 04:02:00.089442 kubelet[2309]: E0909 04:02:00.089410 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.58.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:02:00.090303 kubelet[2309]: E0909 04:02:00.090278 2309 kubelet.go:1478] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 04:02:00.110261 kubelet[2309]: I0909 04:02:00.110228 2309 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 04:02:00.110261 kubelet[2309]: I0909 04:02:00.110255 2309 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 04:02:00.110464 kubelet[2309]: I0909 04:02:00.110286 2309 state_mem.go:36] "Initialized new in-memory state store" Sep 9 04:02:00.112652 kubelet[2309]: I0909 04:02:00.112609 2309 policy_none.go:49] "None policy: Start" Sep 9 04:02:00.113595 kubelet[2309]: I0909 04:02:00.113567 2309 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 04:02:00.113687 kubelet[2309]: I0909 04:02:00.113604 2309 state_mem.go:35] "Initializing new in-memory state store" Sep 9 04:02:00.128605 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 04:02:00.147111 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 04:02:00.153039 kubelet[2309]: E0909 04:02:00.151065 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found" Sep 9 04:02:00.156951 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 04:02:00.172418 kubelet[2309]: E0909 04:02:00.172383 2309 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 04:02:00.175298 kubelet[2309]: I0909 04:02:00.175262 2309 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 04:02:00.176120 kubelet[2309]: I0909 04:02:00.175668 2309 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 04:02:00.176120 kubelet[2309]: I0909 04:02:00.175969 2309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 04:02:00.177605 kubelet[2309]: I0909 04:02:00.177231 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 04:02:00.214838 kubelet[2309]: E0909 04:02:00.181687 2309 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:00.255214 kubelet[2309]: E0909 04:02:00.255157 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gbnqu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.214:6443: connect: connection refused" interval="400ms"
Sep 9 04:02:00.281931 kubelet[2309]: I0909 04:02:00.281845 2309 kubelet_node_status.go:72] "Attempting to register node" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.282355 kubelet[2309]: E0909 04:02:00.282320 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.58.214:6443/api/v1/nodes\": dial tcp 10.230.58.214:6443: connect: connection refused" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.390930 systemd[1]: Created slice kubepods-burstable-poda964b4bb118f7d36693490a85a762874.slice - libcontainer container kubepods-burstable-poda964b4bb118f7d36693490a85a762874.slice.
Sep 9 04:02:00.426623 systemd[1]: Created slice kubepods-burstable-pod03a4ae0217bba56c9f83b3c3e0675308.slice - libcontainer container kubepods-burstable-pod03a4ae0217bba56c9f83b3c3e0675308.slice.
Sep 9 04:02:00.453570 systemd[1]: Created slice kubepods-burstable-pod09cce2d91d614128a5d635324620ff18.slice - libcontainer container kubepods-burstable-pod09cce2d91d614128a5d635324620ff18.slice.
Sep 9 04:02:00.454809 kubelet[2309]: I0909 04:02:00.453989 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a964b4bb118f7d36693490a85a762874-k8s-certs\") pod \"kube-apiserver-srv-gbnqu.gb1.brightbox.com\" (UID: \"a964b4bb118f7d36693490a85a762874\") " pod="kube-system/kube-apiserver-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.454809 kubelet[2309]: I0909 04:02:00.454039 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a964b4bb118f7d36693490a85a762874-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gbnqu.gb1.brightbox.com\" (UID: \"a964b4bb118f7d36693490a85a762874\") " pod="kube-system/kube-apiserver-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.454809 kubelet[2309]: I0909 04:02:00.454081 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03a4ae0217bba56c9f83b3c3e0675308-flexvolume-dir\") pod \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" (UID: \"03a4ae0217bba56c9f83b3c3e0675308\") " pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.454809 kubelet[2309]: I0909 04:02:00.454110 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09cce2d91d614128a5d635324620ff18-kubeconfig\") pod \"kube-scheduler-srv-gbnqu.gb1.brightbox.com\" (UID: \"09cce2d91d614128a5d635324620ff18\") " pod="kube-system/kube-scheduler-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.454809 kubelet[2309]: I0909 04:02:00.454137 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a964b4bb118f7d36693490a85a762874-ca-certs\") pod \"kube-apiserver-srv-gbnqu.gb1.brightbox.com\" (UID: \"a964b4bb118f7d36693490a85a762874\") " pod="kube-system/kube-apiserver-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.455173 kubelet[2309]: I0909 04:02:00.454163 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03a4ae0217bba56c9f83b3c3e0675308-ca-certs\") pod \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" (UID: \"03a4ae0217bba56c9f83b3c3e0675308\") " pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.455173 kubelet[2309]: I0909 04:02:00.454191 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03a4ae0217bba56c9f83b3c3e0675308-k8s-certs\") pod \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" (UID: \"03a4ae0217bba56c9f83b3c3e0675308\") " pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.455173 kubelet[2309]: I0909 04:02:00.454220 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03a4ae0217bba56c9f83b3c3e0675308-kubeconfig\") pod \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" (UID: \"03a4ae0217bba56c9f83b3c3e0675308\") " pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.455173 kubelet[2309]: I0909 04:02:00.454248 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03a4ae0217bba56c9f83b3c3e0675308-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" (UID: \"03a4ae0217bba56c9f83b3c3e0675308\") " pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.487163 kubelet[2309]: I0909 04:02:00.486047 2309 kubelet_node_status.go:72] "Attempting to register node" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.487163 kubelet[2309]: E0909 04:02:00.486455 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.58.214:6443/api/v1/nodes\": dial tcp 10.230.58.214:6443: connect: connection refused" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.655957 kubelet[2309]: E0909 04:02:00.655873 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gbnqu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.214:6443: connect: connection refused" interval="800ms"
Sep 9 04:02:00.753522 containerd[1511]: time="2025-09-09T04:02:00.752761914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gbnqu.gb1.brightbox.com,Uid:03a4ae0217bba56c9f83b3c3e0675308,Namespace:kube-system,Attempt:0,}"
Sep 9 04:02:00.753522 containerd[1511]: time="2025-09-09T04:02:00.752789503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gbnqu.gb1.brightbox.com,Uid:a964b4bb118f7d36693490a85a762874,Namespace:kube-system,Attempt:0,}"
Sep 9 04:02:00.771015 containerd[1511]: time="2025-09-09T04:02:00.770956092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gbnqu.gb1.brightbox.com,Uid:09cce2d91d614128a5d635324620ff18,Namespace:kube-system,Attempt:0,}"
Sep 9 04:02:00.890525 kubelet[2309]: I0909 04:02:00.890487 2309 kubelet_node_status.go:72] "Attempting to register node" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.890985 kubelet[2309]: E0909 04:02:00.890928 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.58.214:6443/api/v1/nodes\": dial tcp 10.230.58.214:6443: connect: connection refused" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:00.978391 kubelet[2309]: W0909 04:02:00.978239 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.58.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gbnqu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.214:6443: connect: connection refused
Sep 9 04:02:00.978391 kubelet[2309]: E0909 04:02:00.978345 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.58.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gbnqu.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:02:01.055768 kubelet[2309]: W0909 04:02:01.055561 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.58.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.58.214:6443: connect: connection refused
Sep 9 04:02:01.055768 kubelet[2309]: E0909 04:02:01.055721 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.58.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:02:01.189841 kubelet[2309]: W0909 04:02:01.189577 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.58.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.58.214:6443: connect: connection refused
Sep 9 04:02:01.189841 kubelet[2309]: E0909 04:02:01.189672 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.58.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:02:01.457150 kubelet[2309]: E0909 04:02:01.457083 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gbnqu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.214:6443: connect: connection refused" interval="1.6s"
Sep 9 04:02:01.487577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539342456.mount: Deactivated successfully.
Sep 9 04:02:01.502161 containerd[1511]: time="2025-09-09T04:02:01.502086899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:02:01.504070 containerd[1511]: time="2025-09-09T04:02:01.503964114Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:02:01.505501 containerd[1511]: time="2025-09-09T04:02:01.505430476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 9 04:02:01.508026 containerd[1511]: time="2025-09-09T04:02:01.506524775Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:02:01.508026 containerd[1511]: time="2025-09-09T04:02:01.507965272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Sep 9 04:02:01.508527 containerd[1511]: time="2025-09-09T04:02:01.508487444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 9 04:02:01.509006 containerd[1511]: time="2025-09-09T04:02:01.508952994Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:02:01.515189 containerd[1511]: time="2025-09-09T04:02:01.515141060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:02:01.516212 kubelet[2309]: W0909 04:02:01.516137 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.58.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.58.214:6443: connect: connection refused
Sep 9 04:02:01.516212 kubelet[2309]: E0909 04:02:01.516206 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.58.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:02:01.518125 containerd[1511]: time="2025-09-09T04:02:01.517639879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 746.5679ms"
Sep 9 04:02:01.523688 containerd[1511]: time="2025-09-09T04:02:01.523589759Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 770.491557ms"
Sep 9 04:02:01.526573 containerd[1511]: time="2025-09-09T04:02:01.526433445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 773.319081ms"
Sep 9 04:02:01.694587 kubelet[2309]: I0909 04:02:01.694540 2309 kubelet_node_status.go:72] "Attempting to register node" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:01.695677 kubelet[2309]: E0909 04:02:01.695617 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.58.214:6443/api/v1/nodes\": dial tcp 10.230.58.214:6443: connect: connection refused" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:01.740942 containerd[1511]: time="2025-09-09T04:02:01.740643271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 04:02:01.744982 containerd[1511]: time="2025-09-09T04:02:01.741295380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 04:02:01.744982 containerd[1511]: time="2025-09-09T04:02:01.741371321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:02:01.744982 containerd[1511]: time="2025-09-09T04:02:01.741488577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:02:01.758787 containerd[1511]: time="2025-09-09T04:02:01.758416488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 04:02:01.758787 containerd[1511]: time="2025-09-09T04:02:01.758737525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 04:02:01.759810 containerd[1511]: time="2025-09-09T04:02:01.758769648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:02:01.760553 containerd[1511]: time="2025-09-09T04:02:01.760193766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:02:01.764458 containerd[1511]: time="2025-09-09T04:02:01.763341435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 04:02:01.764458 containerd[1511]: time="2025-09-09T04:02:01.764079362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 04:02:01.764458 containerd[1511]: time="2025-09-09T04:02:01.764141795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:02:01.764458 containerd[1511]: time="2025-09-09T04:02:01.764286365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:02:01.802617 systemd[1]: Started cri-containerd-b8908c18dd1ab9eb697739829e24ad03fdaa406a1a191ea18ec655a8830065a3.scope - libcontainer container b8908c18dd1ab9eb697739829e24ad03fdaa406a1a191ea18ec655a8830065a3.
Sep 9 04:02:01.818798 systemd[1]: Started cri-containerd-f629bd3b912f3472fb43052142e2577eb780716b1984b7ebf8e8be6b26be94f8.scope - libcontainer container f629bd3b912f3472fb43052142e2577eb780716b1984b7ebf8e8be6b26be94f8.
Sep 9 04:02:01.859596 systemd[1]: Started cri-containerd-a3a531c9991e9262735f56d6ea5ecc59b79675d111c5036d80623c90b6dac525.scope - libcontainer container a3a531c9991e9262735f56d6ea5ecc59b79675d111c5036d80623c90b6dac525.
Sep 9 04:02:01.938100 containerd[1511]: time="2025-09-09T04:02:01.937877905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gbnqu.gb1.brightbox.com,Uid:03a4ae0217bba56c9f83b3c3e0675308,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8908c18dd1ab9eb697739829e24ad03fdaa406a1a191ea18ec655a8830065a3\""
Sep 9 04:02:01.953965 containerd[1511]: time="2025-09-09T04:02:01.953774245Z" level=info msg="CreateContainer within sandbox \"b8908c18dd1ab9eb697739829e24ad03fdaa406a1a191ea18ec655a8830065a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 04:02:01.975688 containerd[1511]: time="2025-09-09T04:02:01.975234182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gbnqu.gb1.brightbox.com,Uid:a964b4bb118f7d36693490a85a762874,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3a531c9991e9262735f56d6ea5ecc59b79675d111c5036d80623c90b6dac525\""
Sep 9 04:02:01.982331 containerd[1511]: time="2025-09-09T04:02:01.982246760Z" level=info msg="CreateContainer within sandbox \"a3a531c9991e9262735f56d6ea5ecc59b79675d111c5036d80623c90b6dac525\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 04:02:01.989939 containerd[1511]: time="2025-09-09T04:02:01.989715337Z" level=info msg="CreateContainer within sandbox \"b8908c18dd1ab9eb697739829e24ad03fdaa406a1a191ea18ec655a8830065a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b1c2fb704e5cdc04934bbba9f09e6691bb83f67e0a78a4d931f54da472d6bb0\""
Sep 9 04:02:01.992176 containerd[1511]: time="2025-09-09T04:02:01.990720687Z" level=info msg="StartContainer for \"5b1c2fb704e5cdc04934bbba9f09e6691bb83f67e0a78a4d931f54da472d6bb0\""
Sep 9 04:02:02.000331 containerd[1511]: time="2025-09-09T04:02:02.000272844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gbnqu.gb1.brightbox.com,Uid:09cce2d91d614128a5d635324620ff18,Namespace:kube-system,Attempt:0,} returns sandbox id \"f629bd3b912f3472fb43052142e2577eb780716b1984b7ebf8e8be6b26be94f8\""
Sep 9 04:02:02.004798 containerd[1511]: time="2025-09-09T04:02:02.004626304Z" level=info msg="CreateContainer within sandbox \"f629bd3b912f3472fb43052142e2577eb780716b1984b7ebf8e8be6b26be94f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 04:02:02.012635 containerd[1511]: time="2025-09-09T04:02:02.012586707Z" level=info msg="CreateContainer within sandbox \"a3a531c9991e9262735f56d6ea5ecc59b79675d111c5036d80623c90b6dac525\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fc009e14a344e5e69c0b70be0b81e2841ad79726ba5ca76c7fdd4c397ba88943\""
Sep 9 04:02:02.017565 containerd[1511]: time="2025-09-09T04:02:02.017529909Z" level=info msg="StartContainer for \"fc009e14a344e5e69c0b70be0b81e2841ad79726ba5ca76c7fdd4c397ba88943\""
Sep 9 04:02:02.028113 kubelet[2309]: E0909 04:02:02.028043 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.58.214:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:02:02.059312 containerd[1511]: time="2025-09-09T04:02:02.056438890Z" level=info msg="CreateContainer within sandbox \"f629bd3b912f3472fb43052142e2577eb780716b1984b7ebf8e8be6b26be94f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"062052c105f5ab41a59a3f617bbc09ba6d96fe42abfd67bfda6b5ff34c81a8f4\""
Sep 9 04:02:02.063452 containerd[1511]: time="2025-09-09T04:02:02.063417773Z" level=info msg="StartContainer for \"062052c105f5ab41a59a3f617bbc09ba6d96fe42abfd67bfda6b5ff34c81a8f4\""
Sep 9 04:02:02.065562 systemd[1]: Started cri-containerd-5b1c2fb704e5cdc04934bbba9f09e6691bb83f67e0a78a4d931f54da472d6bb0.scope - libcontainer container 5b1c2fb704e5cdc04934bbba9f09e6691bb83f67e0a78a4d931f54da472d6bb0.
Sep 9 04:02:02.131029 systemd[1]: Started cri-containerd-fc009e14a344e5e69c0b70be0b81e2841ad79726ba5ca76c7fdd4c397ba88943.scope - libcontainer container fc009e14a344e5e69c0b70be0b81e2841ad79726ba5ca76c7fdd4c397ba88943.
Sep 9 04:02:02.148619 systemd[1]: Started cri-containerd-062052c105f5ab41a59a3f617bbc09ba6d96fe42abfd67bfda6b5ff34c81a8f4.scope - libcontainer container 062052c105f5ab41a59a3f617bbc09ba6d96fe42abfd67bfda6b5ff34c81a8f4.
Sep 9 04:02:02.176812 containerd[1511]: time="2025-09-09T04:02:02.176727838Z" level=info msg="StartContainer for \"5b1c2fb704e5cdc04934bbba9f09e6691bb83f67e0a78a4d931f54da472d6bb0\" returns successfully"
Sep 9 04:02:02.247490 containerd[1511]: time="2025-09-09T04:02:02.247045554Z" level=info msg="StartContainer for \"fc009e14a344e5e69c0b70be0b81e2841ad79726ba5ca76c7fdd4c397ba88943\" returns successfully"
Sep 9 04:02:02.278888 containerd[1511]: time="2025-09-09T04:02:02.278683068Z" level=info msg="StartContainer for \"062052c105f5ab41a59a3f617bbc09ba6d96fe42abfd67bfda6b5ff34c81a8f4\" returns successfully"
Sep 9 04:02:02.603864 kubelet[2309]: W0909 04:02:02.603330 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.58.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gbnqu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.214:6443: connect: connection refused
Sep 9 04:02:02.603864 kubelet[2309]: E0909 04:02:02.603795 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.58.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gbnqu.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.58.214:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:02:03.303186 kubelet[2309]: I0909 04:02:03.302292 2309 kubelet_node_status.go:72] "Attempting to register node" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:05.145114 kubelet[2309]: E0909 04:02:05.144633 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-gbnqu.gb1.brightbox.com\" not found" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:05.220732 kubelet[2309]: I0909 04:02:05.220665 2309 kubelet_node_status.go:75] "Successfully registered node" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:05.220955 kubelet[2309]: E0909 04:02:05.220743 2309 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"srv-gbnqu.gb1.brightbox.com\": node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:05.242164 kubelet[2309]: E0909 04:02:05.241877 2309 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-gbnqu.gb1.brightbox.com.1863815dedc3f2eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gbnqu.gb1.brightbox.com,UID:srv-gbnqu.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gbnqu.gb1.brightbox.com,},FirstTimestamp:2025-09-09 04:02:00.025608939 +0000 UTC m=+1.353327330,LastTimestamp:2025-09-09 04:02:00.025608939 +0000 UTC m=+1.353327330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gbnqu.gb1.brightbox.com,}"
Sep 9 04:02:05.249383 kubelet[2309]: E0909 04:02:05.249305 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:05.350138 kubelet[2309]: E0909 04:02:05.350082 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:05.450765 kubelet[2309]: E0909 04:02:05.450326 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:05.551142 kubelet[2309]: E0909 04:02:05.551035 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:05.651543 kubelet[2309]: E0909 04:02:05.651464 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:05.752338 kubelet[2309]: E0909 04:02:05.752111 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:05.852849 kubelet[2309]: E0909 04:02:05.852772 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:05.953613 kubelet[2309]: E0909 04:02:05.953538 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:06.054700 kubelet[2309]: E0909 04:02:06.054334 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:06.154835 kubelet[2309]: E0909 04:02:06.154762 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:06.255941 kubelet[2309]: E0909 04:02:06.255882 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:06.356805 kubelet[2309]: E0909 04:02:06.356708 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:06.457886 kubelet[2309]: E0909 04:02:06.457797 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:06.558407 kubelet[2309]: E0909 04:02:06.558203 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:06.659152 kubelet[2309]: E0909 04:02:06.658934 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:06.759922 kubelet[2309]: E0909 04:02:06.759857 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:06.860703 kubelet[2309]: E0909 04:02:06.860621 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:06.962170 kubelet[2309]: E0909 04:02:06.961769 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:07.062230 kubelet[2309]: E0909 04:02:07.062085 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:07.163358 kubelet[2309]: E0909 04:02:07.163282 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:07.221511 systemd[1]: Reloading requested from client PID 2585 ('systemctl') (unit session-11.scope)...
Sep 9 04:02:07.222481 systemd[1]: Reloading...
Sep 9 04:02:07.264167 kubelet[2309]: E0909 04:02:07.264090 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:07.365994 kubelet[2309]: E0909 04:02:07.365865 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:07.367433 zram_generator::config[2624]: No configuration found.
Sep 9 04:02:07.466752 kubelet[2309]: E0909 04:02:07.466661 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:07.555452 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 04:02:07.567513 kubelet[2309]: E0909 04:02:07.567436 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:07.668983 kubelet[2309]: E0909 04:02:07.668888 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-gbnqu.gb1.brightbox.com\" not found"
Sep 9 04:02:07.689018 systemd[1]: Reloading finished in 465 ms.
Sep 9 04:02:07.756813 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:02:07.773502 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 04:02:07.774028 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:02:07.774175 systemd[1]: kubelet.service: Consumed 1.937s CPU time, 130.0M memory peak, 0B memory swap peak.
Sep 9 04:02:07.786081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:02:08.162236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:02:08.172258 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 04:02:08.291710 kubelet[2688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 04:02:08.291710 kubelet[2688]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 9 04:02:08.292698 kubelet[2688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 04:02:08.296512 kubelet[2688]: I0909 04:02:08.296166 2688 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 04:02:08.315944 kubelet[2688]: I0909 04:02:08.315890 2688 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 04:02:08.315944 kubelet[2688]: I0909 04:02:08.315934 2688 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 04:02:08.317839 kubelet[2688]: I0909 04:02:08.317777 2688 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 04:02:08.326028 kubelet[2688]: I0909 04:02:08.325240 2688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 9 04:02:08.357176 kubelet[2688]: I0909 04:02:08.356846 2688 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 04:02:08.384303 kubelet[2688]: E0909 04:02:08.384224 2688 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 04:02:08.384303 kubelet[2688]: I0909 04:02:08.384299 2688 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 04:02:08.401146 kubelet[2688]: I0909 04:02:08.400734 2688 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 04:02:08.402617 kubelet[2688]: I0909 04:02:08.401760 2688 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 04:02:08.402617 kubelet[2688]: I0909 04:02:08.402097 2688 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 04:02:08.402617 kubelet[2688]: I0909 04:02:08.402136 2688 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gbnqu.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 04:02:08.402617 kubelet[2688]: I0909 04:02:08.402519 2688 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 04:02:08.402969 kubelet[2688]: I0909 04:02:08.402543 2688 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 04:02:08.402969 kubelet[2688]: I0909 04:02:08.402650 2688 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 04:02:08.402969 kubelet[2688]: I0909 04:02:08.402894 2688 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 04:02:08.402969 kubelet[2688]: I0909 04:02:08.402925 2688 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 04:02:08.403201 kubelet[2688]: I0909 04:02:08.402991 2688 kubelet.go:314] "Adding apiserver pod source"
Sep 9 04:02:08.403201 kubelet[2688]: I0909 04:02:08.403045 2688 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 04:02:08.410944 kubelet[2688]: I0909 04:02:08.410444 2688 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 9 04:02:08.411145 kubelet[2688]: I0909 04:02:08.411121 2688 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 04:02:08.413528 kubelet[2688]: I0909 04:02:08.413150 2688 server.go:1274] "Started kubelet"
Sep 9 04:02:08.463258 kubelet[2688]: I0909 04:02:08.463196 2688 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 04:02:08.477161 kubelet[2688]: I0909 04:02:08.476815 2688 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 04:02:08.487685 kubelet[2688]: I0909 04:02:08.487294 2688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 04:02:08.492874 kubelet[2688]: I0909 04:02:08.491960 2688 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 04:02:08.494012 kubelet[2688]: I0909 04:02:08.493541 2688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 04:02:08.508483 kubelet[2688]: I0909 04:02:08.508286 2688 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 04:02:08.514444 kubelet[2688]: I0909 04:02:08.513681 2688 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 9 04:02:08.516174 kubelet[2688]: I0909 04:02:08.515989 2688 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 9 04:02:08.517763 kubelet[2688]: I0909 04:02:08.516288 2688 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 04:02:08.539589 kubelet[2688]: I0909 04:02:08.533428 2688 factory.go:221] Registration of the systemd container factory successfully
Sep 9 04:02:08.539589 kubelet[2688]: I0909 04:02:08.533575 2688 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 04:02:08.548487 kubelet[2688]: E0909 04:02:08.546907 2688 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 04:02:08.548487 kubelet[2688]: I0909 04:02:08.547271 2688 factory.go:221] Registration of the containerd container factory successfully
Sep 9 04:02:08.582406 kubelet[2688]: I0909 04:02:08.580956 2688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 04:02:08.590017 kubelet[2688]: I0909 04:02:08.588990 2688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 04:02:08.590017 kubelet[2688]: I0909 04:02:08.589159 2688 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 9 04:02:08.591387 kubelet[2688]: I0909 04:02:08.590933 2688 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 9 04:02:08.591387 kubelet[2688]: E0909 04:02:08.591140 2688 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 04:02:08.680520 kubelet[2688]: I0909 04:02:08.679876 2688 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 9 04:02:08.680520 kubelet[2688]: I0909 04:02:08.679907 2688 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 9 04:02:08.680520 kubelet[2688]: I0909 04:02:08.679949 2688 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 04:02:08.680520 kubelet[2688]: I0909 04:02:08.680186 2688 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 04:02:08.680520 kubelet[2688]: I0909 04:02:08.680206 2688 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 04:02:08.680520 kubelet[2688]: I0909 04:02:08.680249 2688 policy_none.go:49] "None policy: Start"
Sep 9 04:02:08.681467 kubelet[2688]: I0909 04:02:08.681429 2688 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 9 04:02:08.681543 kubelet[2688]: I0909 04:02:08.681492 2688 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 04:02:08.681706 kubelet[2688]: I0909 04:02:08.681679 2688 state_mem.go:75] "Updated machine memory state"
Sep 9 04:02:08.692063 kubelet[2688]: E0909 04:02:08.692015 2688 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 04:02:08.693110 kubelet[2688]: I0909 04:02:08.692681 2688 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 04:02:08.693110 kubelet[2688]: I0909 04:02:08.692968 2688 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 04:02:08.693110 kubelet[2688]: I0909 04:02:08.693001 2688 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 04:02:08.694740 kubelet[2688]: I0909 04:02:08.694232 2688 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 04:02:08.845778 kubelet[2688]: I0909 04:02:08.845103 2688 kubelet_node_status.go:72] "Attempting to register node" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.865267 kubelet[2688]: I0909 04:02:08.865205 2688 kubelet_node_status.go:111] "Node was previously registered" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.865849 kubelet[2688]: I0909 04:02:08.865381 2688 kubelet_node_status.go:75] "Successfully registered node" node="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.915146 kubelet[2688]: W0909 04:02:08.913906 2688 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 04:02:08.915146 kubelet[2688]: W0909 04:02:08.913982 2688 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 04:02:08.920705 kubelet[2688]: W0909 04:02:08.920517 2688 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 04:02:08.921985 kubelet[2688]: I0909 04:02:08.921771 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03a4ae0217bba56c9f83b3c3e0675308-ca-certs\") pod \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" (UID: \"03a4ae0217bba56c9f83b3c3e0675308\") " pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.921985 kubelet[2688]: I0909 04:02:08.921828 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03a4ae0217bba56c9f83b3c3e0675308-flexvolume-dir\") pod \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" (UID: \"03a4ae0217bba56c9f83b3c3e0675308\") " pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.921985 kubelet[2688]: I0909 04:02:08.921912 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03a4ae0217bba56c9f83b3c3e0675308-k8s-certs\") pod \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" (UID: \"03a4ae0217bba56c9f83b3c3e0675308\") " pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.921985 kubelet[2688]: I0909 04:02:08.921966 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03a4ae0217bba56c9f83b3c3e0675308-kubeconfig\") pod \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" (UID: \"03a4ae0217bba56c9f83b3c3e0675308\") " pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.922263 kubelet[2688]: I0909 04:02:08.922012 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a964b4bb118f7d36693490a85a762874-ca-certs\") pod \"kube-apiserver-srv-gbnqu.gb1.brightbox.com\" (UID: \"a964b4bb118f7d36693490a85a762874\") " pod="kube-system/kube-apiserver-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.922263 kubelet[2688]: I0909 04:02:08.922057 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a964b4bb118f7d36693490a85a762874-k8s-certs\") pod \"kube-apiserver-srv-gbnqu.gb1.brightbox.com\" (UID: \"a964b4bb118f7d36693490a85a762874\") " pod="kube-system/kube-apiserver-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.922263 kubelet[2688]: I0909 04:02:08.922086 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09cce2d91d614128a5d635324620ff18-kubeconfig\") pod \"kube-scheduler-srv-gbnqu.gb1.brightbox.com\" (UID: \"09cce2d91d614128a5d635324620ff18\") " pod="kube-system/kube-scheduler-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.922263 kubelet[2688]: I0909 04:02:08.922119 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a964b4bb118f7d36693490a85a762874-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gbnqu.gb1.brightbox.com\" (UID: \"a964b4bb118f7d36693490a85a762874\") " pod="kube-system/kube-apiserver-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:08.922263 kubelet[2688]: I0909 04:02:08.922200 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03a4ae0217bba56c9f83b3c3e0675308-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" (UID: \"03a4ae0217bba56c9f83b3c3e0675308\") " pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:09.430312 kubelet[2688]: I0909 04:02:09.430253 2688 apiserver.go:52] "Watching apiserver"
Sep 9 04:02:09.516474 kubelet[2688]: I0909 04:02:09.516402 2688 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 9 04:02:09.656859 kubelet[2688]: W0909 04:02:09.656661 2688 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 04:02:09.656859 kubelet[2688]: W0909 04:02:09.656910 2688 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 04:02:09.657631 kubelet[2688]: E0909 04:02:09.657000 2688 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-gbnqu.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:09.657631 kubelet[2688]: E0909 04:02:09.657284 2688 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-gbnqu.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com"
Sep 9 04:02:09.707914 kubelet[2688]: I0909 04:02:09.706221 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-gbnqu.gb1.brightbox.com" podStartSLOduration=1.706178141 podStartE2EDuration="1.706178141s" podCreationTimestamp="2025-09-09 04:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:02:09.690664293 +0000 UTC m=+1.501305216" watchObservedRunningTime="2025-09-09 04:02:09.706178141 +0000 UTC m=+1.516819052"
Sep 9 04:02:09.707914 kubelet[2688]: I0909 04:02:09.707753 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-gbnqu.gb1.brightbox.com" podStartSLOduration=1.707742643 podStartE2EDuration="1.707742643s" podCreationTimestamp="2025-09-09 04:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:02:09.707349928 +0000 UTC m=+1.517990840" watchObservedRunningTime="2025-09-09 04:02:09.707742643 +0000 UTC m=+1.518383562"
Sep 9 04:02:09.722414 kubelet[2688]: I0909 04:02:09.720069 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-gbnqu.gb1.brightbox.com" podStartSLOduration=1.7200548310000001 podStartE2EDuration="1.720054831s" podCreationTimestamp="2025-09-09 04:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:02:09.718728809 +0000 UTC m=+1.529369741" watchObservedRunningTime="2025-09-09 04:02:09.720054831 +0000 UTC m=+1.530695745"
Sep 9 04:02:12.908721 kubelet[2688]: I0909 04:02:12.908628 2688 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 04:02:12.913101 kubelet[2688]: I0909 04:02:12.912196 2688 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 04:02:12.913200 containerd[1511]: time="2025-09-09T04:02:12.911777294Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 04:02:13.822261 systemd[1]: Created slice kubepods-besteffort-podd29f5ec1_1368_4bba_8878_56eef96226e1.slice - libcontainer container kubepods-besteffort-podd29f5ec1_1368_4bba_8878_56eef96226e1.slice.
Sep 9 04:02:13.858218 kubelet[2688]: I0909 04:02:13.858158 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d29f5ec1-1368-4bba-8878-56eef96226e1-kube-proxy\") pod \"kube-proxy-fm22l\" (UID: \"d29f5ec1-1368-4bba-8878-56eef96226e1\") " pod="kube-system/kube-proxy-fm22l"
Sep 9 04:02:13.858551 kubelet[2688]: I0909 04:02:13.858524 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d29f5ec1-1368-4bba-8878-56eef96226e1-xtables-lock\") pod \"kube-proxy-fm22l\" (UID: \"d29f5ec1-1368-4bba-8878-56eef96226e1\") " pod="kube-system/kube-proxy-fm22l"
Sep 9 04:02:13.858707 kubelet[2688]: I0909 04:02:13.858682 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d29f5ec1-1368-4bba-8878-56eef96226e1-lib-modules\") pod \"kube-proxy-fm22l\" (UID: \"d29f5ec1-1368-4bba-8878-56eef96226e1\") " pod="kube-system/kube-proxy-fm22l"
Sep 9 04:02:13.858862 kubelet[2688]: I0909 04:02:13.858817 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkx5w\" (UniqueName: \"kubernetes.io/projected/d29f5ec1-1368-4bba-8878-56eef96226e1-kube-api-access-jkx5w\") pod \"kube-proxy-fm22l\" (UID: \"d29f5ec1-1368-4bba-8878-56eef96226e1\") " pod="kube-system/kube-proxy-fm22l"
Sep 9 04:02:14.106511 kubelet[2688]: W0909 04:02:14.106295 2688 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:srv-gbnqu.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'srv-gbnqu.gb1.brightbox.com' and this object
Sep 9 04:02:14.106511 kubelet[2688]: E0909 04:02:14.106442 2688 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:srv-gbnqu.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-gbnqu.gb1.brightbox.com' and this object" logger="UnhandledError"
Sep 9 04:02:14.109442 systemd[1]: Created slice kubepods-besteffort-pod982c21f1_11ba_431c_b8a9_64228b2a00d4.slice - libcontainer container kubepods-besteffort-pod982c21f1_11ba_431c_b8a9_64228b2a00d4.slice.
Sep 9 04:02:14.137608 containerd[1511]: time="2025-09-09T04:02:14.137500128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fm22l,Uid:d29f5ec1-1368-4bba-8878-56eef96226e1,Namespace:kube-system,Attempt:0,}"
Sep 9 04:02:14.160552 kubelet[2688]: I0909 04:02:14.160274 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmztq\" (UniqueName: \"kubernetes.io/projected/982c21f1-11ba-431c-b8a9-64228b2a00d4-kube-api-access-gmztq\") pod \"tigera-operator-58fc44c59b-jwghv\" (UID: \"982c21f1-11ba-431c-b8a9-64228b2a00d4\") " pod="tigera-operator/tigera-operator-58fc44c59b-jwghv"
Sep 9 04:02:14.160552 kubelet[2688]: I0909 04:02:14.160428 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/982c21f1-11ba-431c-b8a9-64228b2a00d4-var-lib-calico\") pod \"tigera-operator-58fc44c59b-jwghv\" (UID: \"982c21f1-11ba-431c-b8a9-64228b2a00d4\") " pod="tigera-operator/tigera-operator-58fc44c59b-jwghv"
Sep 9 04:02:14.218382 containerd[1511]: time="2025-09-09T04:02:14.218153381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 04:02:14.219297 containerd[1511]: time="2025-09-09T04:02:14.219215771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 04:02:14.219473 containerd[1511]: time="2025-09-09T04:02:14.219301430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:02:14.219705 containerd[1511]: time="2025-09-09T04:02:14.219524325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:02:14.257652 systemd[1]: Started cri-containerd-2b29c3c129f46a810ea6aaa72fff61f5bc70c171eeaccdeee028fc528dd5ade4.scope - libcontainer container 2b29c3c129f46a810ea6aaa72fff61f5bc70c171eeaccdeee028fc528dd5ade4.
Sep 9 04:02:14.314779 containerd[1511]: time="2025-09-09T04:02:14.314607725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fm22l,Uid:d29f5ec1-1368-4bba-8878-56eef96226e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b29c3c129f46a810ea6aaa72fff61f5bc70c171eeaccdeee028fc528dd5ade4\""
Sep 9 04:02:14.324149 containerd[1511]: time="2025-09-09T04:02:14.324038253Z" level=info msg="CreateContainer within sandbox \"2b29c3c129f46a810ea6aaa72fff61f5bc70c171eeaccdeee028fc528dd5ade4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 04:02:14.353073 containerd[1511]: time="2025-09-09T04:02:14.352874929Z" level=info msg="CreateContainer within sandbox \"2b29c3c129f46a810ea6aaa72fff61f5bc70c171eeaccdeee028fc528dd5ade4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2e45802ac6877e2ae1fc630fa2c59fe81f10193db5fdef6780abbb673ae4d386\""
Sep 9 04:02:14.354399 containerd[1511]: time="2025-09-09T04:02:14.354073762Z" level=info msg="StartContainer for \"2e45802ac6877e2ae1fc630fa2c59fe81f10193db5fdef6780abbb673ae4d386\""
Sep 9 04:02:14.395585 systemd[1]: Started cri-containerd-2e45802ac6877e2ae1fc630fa2c59fe81f10193db5fdef6780abbb673ae4d386.scope - libcontainer container 2e45802ac6877e2ae1fc630fa2c59fe81f10193db5fdef6780abbb673ae4d386.
Sep 9 04:02:14.424181 containerd[1511]: time="2025-09-09T04:02:14.424107491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-jwghv,Uid:982c21f1-11ba-431c-b8a9-64228b2a00d4,Namespace:tigera-operator,Attempt:0,}"
Sep 9 04:02:14.458688 containerd[1511]: time="2025-09-09T04:02:14.457901951Z" level=info msg="StartContainer for \"2e45802ac6877e2ae1fc630fa2c59fe81f10193db5fdef6780abbb673ae4d386\" returns successfully"
Sep 9 04:02:14.486399 containerd[1511]: time="2025-09-09T04:02:14.485914212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 04:02:14.486399 containerd[1511]: time="2025-09-09T04:02:14.486076624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 04:02:14.486399 containerd[1511]: time="2025-09-09T04:02:14.486107706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:02:14.486399 containerd[1511]: time="2025-09-09T04:02:14.486284677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:02:14.520572 systemd[1]: Started cri-containerd-ae241a79aba9311dc22f50ef3b118ec13a03e3e89753a65c51a8b23e1e5a1da0.scope - libcontainer container ae241a79aba9311dc22f50ef3b118ec13a03e3e89753a65c51a8b23e1e5a1da0.
Sep 9 04:02:14.597227 containerd[1511]: time="2025-09-09T04:02:14.597153317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-jwghv,Uid:982c21f1-11ba-431c-b8a9-64228b2a00d4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ae241a79aba9311dc22f50ef3b118ec13a03e3e89753a65c51a8b23e1e5a1da0\""
Sep 9 04:02:14.603078 containerd[1511]: time="2025-09-09T04:02:14.603041486Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 9 04:02:14.679917 kubelet[2688]: I0909 04:02:14.678983 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fm22l" podStartSLOduration=1.678928303 podStartE2EDuration="1.678928303s" podCreationTimestamp="2025-09-09 04:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:02:14.678448604 +0000 UTC m=+6.489089523" watchObservedRunningTime="2025-09-09 04:02:14.678928303 +0000 UTC m=+6.489569213"
Sep 9 04:02:16.719517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883847854.mount: Deactivated successfully.
Sep 9 04:02:17.929594 containerd[1511]: time="2025-09-09T04:02:17.929319422Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:02:17.931956 containerd[1511]: time="2025-09-09T04:02:17.931849402Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 9 04:02:17.933395 containerd[1511]: time="2025-09-09T04:02:17.933186744Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:02:17.937897 containerd[1511]: time="2025-09-09T04:02:17.937833978Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:02:17.939511 containerd[1511]: time="2025-09-09T04:02:17.939358640Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.336249963s"
Sep 9 04:02:17.939682 containerd[1511]: time="2025-09-09T04:02:17.939652384Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 9 04:02:17.944892 containerd[1511]: time="2025-09-09T04:02:17.944856441Z" level=info msg="CreateContainer within sandbox \"ae241a79aba9311dc22f50ef3b118ec13a03e3e89753a65c51a8b23e1e5a1da0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 9 04:02:17.983468 containerd[1511]: time="2025-09-09T04:02:17.983409406Z" level=info msg="CreateContainer within sandbox \"ae241a79aba9311dc22f50ef3b118ec13a03e3e89753a65c51a8b23e1e5a1da0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6a3bf22cc0ced1df8d4e731b3dbfdf9868c1f67f2086857601cb1e3c285ffbc2\""
Sep 9 04:02:17.985666 containerd[1511]: time="2025-09-09T04:02:17.985625205Z" level=info msg="StartContainer for \"6a3bf22cc0ced1df8d4e731b3dbfdf9868c1f67f2086857601cb1e3c285ffbc2\""
Sep 9 04:02:18.056818 systemd[1]: Started cri-containerd-6a3bf22cc0ced1df8d4e731b3dbfdf9868c1f67f2086857601cb1e3c285ffbc2.scope - libcontainer container 6a3bf22cc0ced1df8d4e731b3dbfdf9868c1f67f2086857601cb1e3c285ffbc2.
Sep 9 04:02:18.099272 containerd[1511]: time="2025-09-09T04:02:18.099186847Z" level=info msg="StartContainer for \"6a3bf22cc0ced1df8d4e731b3dbfdf9868c1f67f2086857601cb1e3c285ffbc2\" returns successfully"
Sep 9 04:02:18.696169 kubelet[2688]: I0909 04:02:18.695967 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-jwghv" podStartSLOduration=1.353601544 podStartE2EDuration="4.695869307s" podCreationTimestamp="2025-09-09 04:02:14 +0000 UTC" firstStartedPulling="2025-09-09 04:02:14.599539597 +0000 UTC m=+6.410180497" lastFinishedPulling="2025-09-09 04:02:17.941807346 +0000 UTC m=+9.752448260" observedRunningTime="2025-09-09 04:02:18.694293338 +0000 UTC m=+10.504934256" watchObservedRunningTime="2025-09-09 04:02:18.695869307 +0000 UTC m=+10.506510230"
Sep 9 04:02:26.129818 sudo[1773]: pam_unix(sudo:session): session closed for user root
Sep 9 04:02:26.286827 sshd[1770]: pam_unix(sshd:session): session closed for user core
Sep 9 04:02:26.299423 systemd[1]: sshd@8-10.230.58.214:22-147.75.109.163:55606.service: Deactivated successfully.
Sep 9 04:02:26.309533 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 04:02:26.310546 systemd[1]: session-11.scope: Consumed 7.679s CPU time, 141.7M memory peak, 0B memory swap peak.
Sep 9 04:02:26.314416 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit.
Sep 9 04:02:26.319572 systemd-logind[1485]: Removed session 11.
Sep 9 04:02:32.165765 systemd[1]: Created slice kubepods-besteffort-pod7ad95d54_e8fa_423f_9e42_20e73347e152.slice - libcontainer container kubepods-besteffort-pod7ad95d54_e8fa_423f_9e42_20e73347e152.slice.
Sep 9 04:02:32.291437 kubelet[2688]: I0909 04:02:32.289796 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7ad95d54-e8fa-423f-9e42-20e73347e152-typha-certs\") pod \"calico-typha-7849c6bbb8-569g8\" (UID: \"7ad95d54-e8fa-423f-9e42-20e73347e152\") " pod="calico-system/calico-typha-7849c6bbb8-569g8"
Sep 9 04:02:32.292898 kubelet[2688]: I0909 04:02:32.292447 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48hkk\" (UniqueName: \"kubernetes.io/projected/7ad95d54-e8fa-423f-9e42-20e73347e152-kube-api-access-48hkk\") pod \"calico-typha-7849c6bbb8-569g8\" (UID: \"7ad95d54-e8fa-423f-9e42-20e73347e152\") " pod="calico-system/calico-typha-7849c6bbb8-569g8"
Sep 9 04:02:32.292898 kubelet[2688]: I0909 04:02:32.292611 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ad95d54-e8fa-423f-9e42-20e73347e152-tigera-ca-bundle\") pod \"calico-typha-7849c6bbb8-569g8\" (UID: \"7ad95d54-e8fa-423f-9e42-20e73347e152\") " pod="calico-system/calico-typha-7849c6bbb8-569g8"
Sep 9 04:02:32.466401 systemd[1]: Created slice kubepods-besteffort-poda18eaf13_9549_486a_a11b_111b35f23f8b.slice - libcontainer container kubepods-besteffort-poda18eaf13_9549_486a_a11b_111b35f23f8b.slice.
Sep 9 04:02:32.476170 containerd[1511]: time="2025-09-09T04:02:32.475864830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7849c6bbb8-569g8,Uid:7ad95d54-e8fa-423f-9e42-20e73347e152,Namespace:calico-system,Attempt:0,}" Sep 9 04:02:32.495797 kubelet[2688]: I0909 04:02:32.495744 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a18eaf13-9549-486a-a11b-111b35f23f8b-cni-net-dir\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.495966 kubelet[2688]: I0909 04:02:32.495809 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a18eaf13-9549-486a-a11b-111b35f23f8b-policysync\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.495966 kubelet[2688]: I0909 04:02:32.495846 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a18eaf13-9549-486a-a11b-111b35f23f8b-var-lib-calico\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.495966 kubelet[2688]: I0909 04:02:32.495876 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a18eaf13-9549-486a-a11b-111b35f23f8b-cni-log-dir\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.495966 kubelet[2688]: I0909 04:02:32.495903 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a18eaf13-9549-486a-a11b-111b35f23f8b-tigera-ca-bundle\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.495966 kubelet[2688]: I0909 04:02:32.495944 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a18eaf13-9549-486a-a11b-111b35f23f8b-cni-bin-dir\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.496278 kubelet[2688]: I0909 04:02:32.495987 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a18eaf13-9549-486a-a11b-111b35f23f8b-lib-modules\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.496278 kubelet[2688]: I0909 04:02:32.496035 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a18eaf13-9549-486a-a11b-111b35f23f8b-node-certs\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.496278 kubelet[2688]: I0909 04:02:32.496089 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a18eaf13-9549-486a-a11b-111b35f23f8b-xtables-lock\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.496278 kubelet[2688]: I0909 04:02:32.496208 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv6gg\" (UniqueName: \"kubernetes.io/projected/a18eaf13-9549-486a-a11b-111b35f23f8b-kube-api-access-sv6gg\") 
pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.496278 kubelet[2688]: I0909 04:02:32.496249 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a18eaf13-9549-486a-a11b-111b35f23f8b-flexvol-driver-host\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.496772 kubelet[2688]: I0909 04:02:32.496286 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a18eaf13-9549-486a-a11b-111b35f23f8b-var-run-calico\") pod \"calico-node-h982s\" (UID: \"a18eaf13-9549-486a-a11b-111b35f23f8b\") " pod="calico-system/calico-node-h982s" Sep 9 04:02:32.548308 containerd[1511]: time="2025-09-09T04:02:32.547813291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 04:02:32.548308 containerd[1511]: time="2025-09-09T04:02:32.547946467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 04:02:32.548308 containerd[1511]: time="2025-09-09T04:02:32.547964366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:02:32.548308 containerd[1511]: time="2025-09-09T04:02:32.548119060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:02:32.617820 kubelet[2688]: E0909 04:02:32.616912 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.617820 kubelet[2688]: W0909 04:02:32.617425 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.621855 kubelet[2688]: E0909 04:02:32.618112 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.636402 kubelet[2688]: E0909 04:02:32.635548 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.636402 kubelet[2688]: W0909 04:02:32.635580 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.636402 kubelet[2688]: E0909 04:02:32.635608 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.662946 systemd[1]: Started cri-containerd-59c7200e5a6eb3155b61ee7a10156b56bea6062cb98522cbbb79be747616724b.scope - libcontainer container 59c7200e5a6eb3155b61ee7a10156b56bea6062cb98522cbbb79be747616724b. 
Sep 9 04:02:32.726071 kubelet[2688]: E0909 04:02:32.725731 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148" Sep 9 04:02:32.774308 containerd[1511]: time="2025-09-09T04:02:32.774243653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h982s,Uid:a18eaf13-9549-486a-a11b-111b35f23f8b,Namespace:calico-system,Attempt:0,}" Sep 9 04:02:32.795135 kubelet[2688]: E0909 04:02:32.793655 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.795135 kubelet[2688]: W0909 04:02:32.793738 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.795135 kubelet[2688]: E0909 04:02:32.793897 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.799541 kubelet[2688]: E0909 04:02:32.799105 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.799541 kubelet[2688]: W0909 04:02:32.799397 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.799541 kubelet[2688]: E0909 04:02:32.799425 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.801136 kubelet[2688]: E0909 04:02:32.800643 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.801136 kubelet[2688]: W0909 04:02:32.800677 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.801136 kubelet[2688]: E0909 04:02:32.800709 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.803427 kubelet[2688]: E0909 04:02:32.803079 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.803427 kubelet[2688]: W0909 04:02:32.803101 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.803427 kubelet[2688]: E0909 04:02:32.803140 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.805964 kubelet[2688]: E0909 04:02:32.805166 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.805964 kubelet[2688]: W0909 04:02:32.805197 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.805964 kubelet[2688]: E0909 04:02:32.805220 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.805964 kubelet[2688]: I0909 04:02:32.805253 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f3923f8-1ebf-4579-9a05-a6111fc5a148-kubelet-dir\") pod \"csi-node-driver-79jrx\" (UID: \"4f3923f8-1ebf-4579-9a05-a6111fc5a148\") " pod="calico-system/csi-node-driver-79jrx" Sep 9 04:02:32.808704 kubelet[2688]: E0909 04:02:32.806666 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.808704 kubelet[2688]: W0909 04:02:32.806700 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.810073 kubelet[2688]: E0909 04:02:32.810046 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.810627 kubelet[2688]: E0909 04:02:32.810505 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.810627 kubelet[2688]: W0909 04:02:32.810530 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.810627 kubelet[2688]: E0909 04:02:32.810588 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.812596 kubelet[2688]: E0909 04:02:32.812424 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.812596 kubelet[2688]: W0909 04:02:32.812446 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.812596 kubelet[2688]: E0909 04:02:32.812502 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.812995 kubelet[2688]: E0909 04:02:32.812879 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.812995 kubelet[2688]: W0909 04:02:32.812926 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.813968 kubelet[2688]: E0909 04:02:32.813377 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.813968 kubelet[2688]: E0909 04:02:32.813763 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.813968 kubelet[2688]: W0909 04:02:32.813779 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.813968 kubelet[2688]: E0909 04:02:32.813799 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.814768 kubelet[2688]: E0909 04:02:32.814562 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.814768 kubelet[2688]: W0909 04:02:32.814578 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.814768 kubelet[2688]: E0909 04:02:32.814595 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.815308 kubelet[2688]: E0909 04:02:32.814994 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.815308 kubelet[2688]: W0909 04:02:32.815018 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.815308 kubelet[2688]: E0909 04:02:32.815038 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.816450 kubelet[2688]: E0909 04:02:32.815296 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.816450 kubelet[2688]: W0909 04:02:32.815400 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.816450 kubelet[2688]: E0909 04:02:32.815418 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.816450 kubelet[2688]: E0909 04:02:32.815759 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.816450 kubelet[2688]: W0909 04:02:32.815775 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.816450 kubelet[2688]: E0909 04:02:32.815792 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.816450 kubelet[2688]: E0909 04:02:32.816084 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.816450 kubelet[2688]: W0909 04:02:32.816100 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.816450 kubelet[2688]: E0909 04:02:32.816115 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.816450 kubelet[2688]: E0909 04:02:32.816429 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.817187 kubelet[2688]: W0909 04:02:32.816444 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.817187 kubelet[2688]: E0909 04:02:32.816460 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.819842 kubelet[2688]: E0909 04:02:32.818892 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.819842 kubelet[2688]: W0909 04:02:32.818916 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.819842 kubelet[2688]: E0909 04:02:32.818949 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.820625 kubelet[2688]: E0909 04:02:32.820581 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.820625 kubelet[2688]: W0909 04:02:32.820603 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.820625 kubelet[2688]: E0909 04:02:32.820622 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.821807 kubelet[2688]: E0909 04:02:32.821647 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.821807 kubelet[2688]: W0909 04:02:32.821669 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.821807 kubelet[2688]: E0909 04:02:32.821698 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.822967 kubelet[2688]: E0909 04:02:32.822498 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.822967 kubelet[2688]: W0909 04:02:32.822519 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.822967 kubelet[2688]: E0909 04:02:32.822538 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.822967 kubelet[2688]: E0909 04:02:32.822855 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.822967 kubelet[2688]: W0909 04:02:32.822870 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.822967 kubelet[2688]: E0909 04:02:32.822887 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.823325 kubelet[2688]: E0909 04:02:32.823260 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.823325 kubelet[2688]: W0909 04:02:32.823291 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.823325 kubelet[2688]: E0909 04:02:32.823310 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:32.824565 kubelet[2688]: E0909 04:02:32.823657 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.824565 kubelet[2688]: W0909 04:02:32.823679 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.824565 kubelet[2688]: E0909 04:02:32.823755 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:32.868530 containerd[1511]: time="2025-09-09T04:02:32.868038732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 04:02:32.870515 containerd[1511]: time="2025-09-09T04:02:32.868325574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 04:02:32.872523 containerd[1511]: time="2025-09-09T04:02:32.870808469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:02:32.872523 containerd[1511]: time="2025-09-09T04:02:32.870973857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:02:32.875513 containerd[1511]: time="2025-09-09T04:02:32.875434470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7849c6bbb8-569g8,Uid:7ad95d54-e8fa-423f-9e42-20e73347e152,Namespace:calico-system,Attempt:0,} returns sandbox id \"59c7200e5a6eb3155b61ee7a10156b56bea6062cb98522cbbb79be747616724b\"" Sep 9 04:02:32.884168 containerd[1511]: time="2025-09-09T04:02:32.884125418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 04:02:32.907917 kubelet[2688]: E0909 04:02:32.907861 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:32.908306 kubelet[2688]: W0909 04:02:32.908157 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:32.908306 kubelet[2688]: E0909 04:02:32.908197 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 9 04:02:32.909071 kubelet[2688]: I0909 04:02:32.908284 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7677\" (UniqueName: \"kubernetes.io/projected/4f3923f8-1ebf-4579-9a05-a6111fc5a148-kube-api-access-q7677\") pod \"csi-node-driver-79jrx\" (UID: \"4f3923f8-1ebf-4579-9a05-a6111fc5a148\") " pod="calico-system/csi-node-driver-79jrx"
Sep 9 04:02:32.909071 kubelet[2688]: E0909 04:02:32.908895 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.909071 kubelet[2688]: W0909 04:02:32.908924 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.909071 kubelet[2688]: E0909 04:02:32.908950 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.909623 kubelet[2688]: E0909 04:02:32.909548 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.909623 kubelet[2688]: W0909 04:02:32.909570 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.909623 kubelet[2688]: E0909 04:02:32.909596 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.911597 kubelet[2688]: E0909 04:02:32.910925 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.911597 kubelet[2688]: W0909 04:02:32.910948 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.911597 kubelet[2688]: E0909 04:02:32.910989 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.912458 kubelet[2688]: I0909 04:02:32.911030 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f3923f8-1ebf-4579-9a05-a6111fc5a148-registration-dir\") pod \"csi-node-driver-79jrx\" (UID: \"4f3923f8-1ebf-4579-9a05-a6111fc5a148\") " pod="calico-system/csi-node-driver-79jrx"
Sep 9 04:02:32.912458 kubelet[2688]: E0909 04:02:32.912179 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.912458 kubelet[2688]: W0909 04:02:32.912207 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.912458 kubelet[2688]: E0909 04:02:32.912246 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.913919 kubelet[2688]: E0909 04:02:32.913623 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.913919 kubelet[2688]: W0909 04:02:32.913674 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.913919 kubelet[2688]: E0909 04:02:32.913752 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.915788 kubelet[2688]: E0909 04:02:32.915131 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.915788 kubelet[2688]: W0909 04:02:32.915152 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.915788 kubelet[2688]: E0909 04:02:32.915170 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.915788 kubelet[2688]: I0909 04:02:32.915238 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f3923f8-1ebf-4579-9a05-a6111fc5a148-socket-dir\") pod \"csi-node-driver-79jrx\" (UID: \"4f3923f8-1ebf-4579-9a05-a6111fc5a148\") " pod="calico-system/csi-node-driver-79jrx"
Sep 9 04:02:32.916865 kubelet[2688]: E0909 04:02:32.916248 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.916865 kubelet[2688]: W0909 04:02:32.916337 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.916865 kubelet[2688]: E0909 04:02:32.916415 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.918255 kubelet[2688]: E0909 04:02:32.917420 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.918255 kubelet[2688]: W0909 04:02:32.917440 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.918255 kubelet[2688]: E0909 04:02:32.917487 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.918255 kubelet[2688]: I0909 04:02:32.917529 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4f3923f8-1ebf-4579-9a05-a6111fc5a148-varrun\") pod \"csi-node-driver-79jrx\" (UID: \"4f3923f8-1ebf-4579-9a05-a6111fc5a148\") " pod="calico-system/csi-node-driver-79jrx"
Sep 9 04:02:32.919999 kubelet[2688]: E0909 04:02:32.919146 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.919999 kubelet[2688]: W0909 04:02:32.919167 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.919999 kubelet[2688]: E0909 04:02:32.919296 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.923953 kubelet[2688]: E0909 04:02:32.922391 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.923953 kubelet[2688]: W0909 04:02:32.922415 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.923953 kubelet[2688]: E0909 04:02:32.923562 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.923953 kubelet[2688]: E0909 04:02:32.923819 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.923953 kubelet[2688]: W0909 04:02:32.923834 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.924537 kubelet[2688]: E0909 04:02:32.924444 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.925352 kubelet[2688]: E0909 04:02:32.925115 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.925352 kubelet[2688]: W0909 04:02:32.925338 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.925352 kubelet[2688]: E0909 04:02:32.925374 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.927022 kubelet[2688]: E0909 04:02:32.926477 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.927022 kubelet[2688]: W0909 04:02:32.926499 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.927022 kubelet[2688]: E0909 04:02:32.926525 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.927555 kubelet[2688]: E0909 04:02:32.927398 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.927555 kubelet[2688]: W0909 04:02:32.927448 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.927555 kubelet[2688]: E0909 04:02:32.927470 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.930634 kubelet[2688]: E0909 04:02:32.928323 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.930634 kubelet[2688]: W0909 04:02:32.928346 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.931655 kubelet[2688]: E0909 04:02:32.930802 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.931896 kubelet[2688]: E0909 04:02:32.931874 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:32.932053 kubelet[2688]: W0909 04:02:32.931986 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:32.932518 kubelet[2688]: E0909 04:02:32.932441 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:32.940604 systemd[1]: Started cri-containerd-6f904bfca5da62703e3fc9c0502b53fe0fa0dda3bcda7406d31ae1499c84abbf.scope - libcontainer container 6f904bfca5da62703e3fc9c0502b53fe0fa0dda3bcda7406d31ae1499c84abbf.
Sep 9 04:02:33.020054 kubelet[2688]: E0909 04:02:33.019652 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.020054 kubelet[2688]: W0909 04:02:33.019703 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.020054 kubelet[2688]: E0909 04:02:33.019734 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.020615 kubelet[2688]: E0909 04:02:33.020428 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.020615 kubelet[2688]: W0909 04:02:33.020444 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.020615 kubelet[2688]: E0909 04:02:33.020585 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.029010 kubelet[2688]: E0909 04:02:33.022497 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.029010 kubelet[2688]: W0909 04:02:33.022519 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.029010 kubelet[2688]: E0909 04:02:33.022546 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.029010 kubelet[2688]: E0909 04:02:33.028320 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.029010 kubelet[2688]: W0909 04:02:33.028340 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.034744 kubelet[2688]: E0909 04:02:33.030049 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.034744 kubelet[2688]: W0909 04:02:33.030072 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.034744 kubelet[2688]: E0909 04:02:33.030958 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.034744 kubelet[2688]: W0909 04:02:33.031095 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.034744 kubelet[2688]: E0909 04:02:33.031120 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.034744 kubelet[2688]: E0909 04:02:33.031828 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.034744 kubelet[2688]: E0909 04:02:33.031930 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.034744 kubelet[2688]: W0909 04:02:33.031948 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.034744 kubelet[2688]: E0909 04:02:33.031964 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.034744 kubelet[2688]: E0909 04:02:33.031984 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.035301 kubelet[2688]: E0909 04:02:33.034349 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.035301 kubelet[2688]: W0909 04:02:33.034387 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.035301 kubelet[2688]: E0909 04:02:33.034428 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.036397 kubelet[2688]: E0909 04:02:33.035637 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.036397 kubelet[2688]: W0909 04:02:33.035659 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.036397 kubelet[2688]: E0909 04:02:33.036310 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.038028 kubelet[2688]: E0909 04:02:33.037481 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.038028 kubelet[2688]: W0909 04:02:33.037502 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.038028 kubelet[2688]: E0909 04:02:33.037553 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.038600 kubelet[2688]: E0909 04:02:33.038434 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.038600 kubelet[2688]: W0909 04:02:33.038450 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.038600 kubelet[2688]: E0909 04:02:33.038503 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.039935 kubelet[2688]: E0909 04:02:33.039576 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.039935 kubelet[2688]: W0909 04:02:33.039596 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.039935 kubelet[2688]: E0909 04:02:33.039872 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.040618 kubelet[2688]: E0909 04:02:33.040413 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.040618 kubelet[2688]: W0909 04:02:33.040432 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.040618 kubelet[2688]: E0909 04:02:33.040508 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.041524 kubelet[2688]: E0909 04:02:33.041215 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.041524 kubelet[2688]: W0909 04:02:33.041237 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.041524 kubelet[2688]: E0909 04:02:33.041279 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.044625 kubelet[2688]: E0909 04:02:33.044249 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.044625 kubelet[2688]: W0909 04:02:33.044271 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.045384 kubelet[2688]: E0909 04:02:33.044917 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.045884 kubelet[2688]: E0909 04:02:33.045796 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.045884 kubelet[2688]: W0909 04:02:33.045818 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.046667 kubelet[2688]: E0909 04:02:33.046120 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.046867 kubelet[2688]: E0909 04:02:33.046846 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.047028 kubelet[2688]: W0909 04:02:33.046995 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.047464 kubelet[2688]: E0909 04:02:33.047223 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.047751 kubelet[2688]: E0909 04:02:33.047729 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.048019 kubelet[2688]: W0909 04:02:33.047877 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.048130 kubelet[2688]: E0909 04:02:33.048108 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.048344 kubelet[2688]: E0909 04:02:33.048324 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.048616 kubelet[2688]: W0909 04:02:33.048477 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.048616 kubelet[2688]: E0909 04:02:33.048504 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.049110 kubelet[2688]: E0909 04:02:33.049038 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.049110 kubelet[2688]: W0909 04:02:33.049057 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.049110 kubelet[2688]: E0909 04:02:33.049075 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:33.063095 containerd[1511]: time="2025-09-09T04:02:33.063041069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h982s,Uid:a18eaf13-9549-486a-a11b-111b35f23f8b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f904bfca5da62703e3fc9c0502b53fe0fa0dda3bcda7406d31ae1499c84abbf\""
Sep 9 04:02:33.067216 kubelet[2688]: E0909 04:02:33.067121 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:33.067216 kubelet[2688]: W0909 04:02:33.067147 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:33.067216 kubelet[2688]: E0909 04:02:33.067173 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:34.593505 kubelet[2688]: E0909 04:02:34.591736 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148"
Sep 9 04:02:34.745006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4096478574.mount: Deactivated successfully.
Sep 9 04:02:36.593788 kubelet[2688]: E0909 04:02:36.593586 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148"
Sep 9 04:02:36.650627 containerd[1511]: time="2025-09-09T04:02:36.650254453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:02:36.652948 containerd[1511]: time="2025-09-09T04:02:36.652410245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 9 04:02:36.653660 containerd[1511]: time="2025-09-09T04:02:36.653584875Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:02:36.657396 containerd[1511]: time="2025-09-09T04:02:36.657221278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:02:36.658784 containerd[1511]: time="2025-09-09T04:02:36.658710802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.774511744s"
Sep 9 04:02:36.659175 containerd[1511]: time="2025-09-09T04:02:36.658937844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 9 04:02:36.666636 containerd[1511]: time="2025-09-09T04:02:36.666448111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 9 04:02:36.698350 containerd[1511]: time="2025-09-09T04:02:36.698296527Z" level=info msg="CreateContainer within sandbox \"59c7200e5a6eb3155b61ee7a10156b56bea6062cb98522cbbb79be747616724b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 9 04:02:36.747805 containerd[1511]: time="2025-09-09T04:02:36.747707596Z" level=info msg="CreateContainer within sandbox \"59c7200e5a6eb3155b61ee7a10156b56bea6062cb98522cbbb79be747616724b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"66f1d6cd5c42913d808200e9da619533b05f61c3dd62058a90e9f7f70dc12c98\""
Sep 9 04:02:36.751211 containerd[1511]: time="2025-09-09T04:02:36.751148920Z" level=info msg="StartContainer for \"66f1d6cd5c42913d808200e9da619533b05f61c3dd62058a90e9f7f70dc12c98\""
Sep 9 04:02:36.827886 systemd[1]: Started cri-containerd-66f1d6cd5c42913d808200e9da619533b05f61c3dd62058a90e9f7f70dc12c98.scope - libcontainer container 66f1d6cd5c42913d808200e9da619533b05f61c3dd62058a90e9f7f70dc12c98.
Sep 9 04:02:36.915176 containerd[1511]: time="2025-09-09T04:02:36.914892333Z" level=info msg="StartContainer for \"66f1d6cd5c42913d808200e9da619533b05f61c3dd62058a90e9f7f70dc12c98\" returns successfully"
Sep 9 04:02:37.803893 kubelet[2688]: I0909 04:02:37.803711 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7849c6bbb8-569g8" podStartSLOduration=2.023028806 podStartE2EDuration="5.803252863s" podCreationTimestamp="2025-09-09 04:02:32 +0000 UTC" firstStartedPulling="2025-09-09 04:02:32.883030958 +0000 UTC m=+24.693671859" lastFinishedPulling="2025-09-09 04:02:36.663255002 +0000 UTC m=+28.473895916" observedRunningTime="2025-09-09 04:02:37.802798435 +0000 UTC m=+29.613439382" watchObservedRunningTime="2025-09-09 04:02:37.803252863 +0000 UTC m=+29.613893777"
Sep 9 04:02:37.871001 kubelet[2688]: E0909 04:02:37.870921 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.871001 kubelet[2688]: W0909 04:02:37.870984 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.871244 kubelet[2688]: E0909 04:02:37.871050 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:37.872744 kubelet[2688]: E0909 04:02:37.872719 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.872872 kubelet[2688]: W0909 04:02:37.872771 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.872872 kubelet[2688]: E0909 04:02:37.872793 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:37.873192 kubelet[2688]: E0909 04:02:37.873132 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.873192 kubelet[2688]: W0909 04:02:37.873148 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.873192 kubelet[2688]: E0909 04:02:37.873165 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:37.873572 kubelet[2688]: E0909 04:02:37.873548 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.873572 kubelet[2688]: W0909 04:02:37.873569 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.873720 kubelet[2688]: E0909 04:02:37.873587 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:37.873934 kubelet[2688]: E0909 04:02:37.873910 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.873934 kubelet[2688]: W0909 04:02:37.873932 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.874246 kubelet[2688]: E0909 04:02:37.873949 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:37.874325 kubelet[2688]: E0909 04:02:37.874292 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.874325 kubelet[2688]: W0909 04:02:37.874307 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.874489 kubelet[2688]: E0909 04:02:37.874324 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:37.874948 kubelet[2688]: E0909 04:02:37.874916 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.874948 kubelet[2688]: W0909 04:02:37.874939 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.876264 kubelet[2688]: E0909 04:02:37.874956 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:37.876264 kubelet[2688]: E0909 04:02:37.875302 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.876264 kubelet[2688]: W0909 04:02:37.875318 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.876264 kubelet[2688]: E0909 04:02:37.875333 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:37.876264 kubelet[2688]: E0909 04:02:37.875713 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.876264 kubelet[2688]: W0909 04:02:37.875728 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.876264 kubelet[2688]: E0909 04:02:37.875745 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:37.876264 kubelet[2688]: E0909 04:02:37.876043 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.876264 kubelet[2688]: W0909 04:02:37.876058 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.876264 kubelet[2688]: E0909 04:02:37.876073 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:02:37.876968 kubelet[2688]: E0909 04:02:37.876404 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:02:37.876968 kubelet[2688]: W0909 04:02:37.876418 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:02:37.876968 kubelet[2688]: E0909 04:02:37.876435 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 9 04:02:37.876968 kubelet[2688]: E0909 04:02:37.876721 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.876968 kubelet[2688]: W0909 04:02:37.876736 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.876968 kubelet[2688]: E0909 04:02:37.876753 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.877321 kubelet[2688]: E0909 04:02:37.877056 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.877321 kubelet[2688]: W0909 04:02:37.877103 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.877321 kubelet[2688]: E0909 04:02:37.877123 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:37.877533 kubelet[2688]: E0909 04:02:37.877468 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.877533 kubelet[2688]: W0909 04:02:37.877482 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.877533 kubelet[2688]: E0909 04:02:37.877497 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.878388 kubelet[2688]: E0909 04:02:37.877786 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.878388 kubelet[2688]: W0909 04:02:37.877807 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.878388 kubelet[2688]: E0909 04:02:37.877824 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:37.971913 kubelet[2688]: E0909 04:02:37.971870 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.971913 kubelet[2688]: W0909 04:02:37.971902 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.972464 kubelet[2688]: E0909 04:02:37.971931 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.972464 kubelet[2688]: E0909 04:02:37.972283 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.972464 kubelet[2688]: W0909 04:02:37.972299 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.972464 kubelet[2688]: E0909 04:02:37.972328 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:37.972876 kubelet[2688]: E0909 04:02:37.972659 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.972876 kubelet[2688]: W0909 04:02:37.972675 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.972876 kubelet[2688]: E0909 04:02:37.972700 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.973096 kubelet[2688]: E0909 04:02:37.973080 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.973332 kubelet[2688]: W0909 04:02:37.973095 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.973332 kubelet[2688]: E0909 04:02:37.973120 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:37.973781 kubelet[2688]: E0909 04:02:37.973636 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.973781 kubelet[2688]: W0909 04:02:37.973662 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.973781 kubelet[2688]: E0909 04:02:37.973697 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.974587 kubelet[2688]: E0909 04:02:37.974383 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.974587 kubelet[2688]: W0909 04:02:37.974409 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.974587 kubelet[2688]: E0909 04:02:37.974455 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:37.975101 kubelet[2688]: E0909 04:02:37.975005 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.975101 kubelet[2688]: W0909 04:02:37.975024 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.975101 kubelet[2688]: E0909 04:02:37.975088 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.975831 kubelet[2688]: E0909 04:02:37.975607 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.975831 kubelet[2688]: W0909 04:02:37.975623 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.975831 kubelet[2688]: E0909 04:02:37.975669 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:37.976385 kubelet[2688]: E0909 04:02:37.976205 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.976385 kubelet[2688]: W0909 04:02:37.976224 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.976385 kubelet[2688]: E0909 04:02:37.976270 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.976974 kubelet[2688]: E0909 04:02:37.976730 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.976974 kubelet[2688]: W0909 04:02:37.976745 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.976974 kubelet[2688]: E0909 04:02:37.976798 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:37.977305 kubelet[2688]: E0909 04:02:37.977093 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.977305 kubelet[2688]: W0909 04:02:37.977108 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.977705 kubelet[2688]: E0909 04:02:37.977505 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.978013 kubelet[2688]: E0909 04:02:37.977865 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.978013 kubelet[2688]: W0909 04:02:37.977880 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.978013 kubelet[2688]: E0909 04:02:37.977997 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:37.978422 kubelet[2688]: E0909 04:02:37.978402 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.978690 kubelet[2688]: W0909 04:02:37.978514 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.978690 kubelet[2688]: E0909 04:02:37.978684 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.979007 kubelet[2688]: E0909 04:02:37.978987 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.979483 kubelet[2688]: W0909 04:02:37.979103 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.979483 kubelet[2688]: E0909 04:02:37.979142 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:37.979651 kubelet[2688]: E0909 04:02:37.979535 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.979651 kubelet[2688]: W0909 04:02:37.979552 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.979777 kubelet[2688]: E0909 04:02:37.979683 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.980126 kubelet[2688]: E0909 04:02:37.980064 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.980126 kubelet[2688]: W0909 04:02:37.980120 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.980288 kubelet[2688]: E0909 04:02:37.980149 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:37.980881 kubelet[2688]: E0909 04:02:37.980799 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.980881 kubelet[2688]: W0909 04:02:37.980820 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.980881 kubelet[2688]: E0909 04:02:37.980838 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:37.982786 kubelet[2688]: E0909 04:02:37.982762 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:37.982786 kubelet[2688]: W0909 04:02:37.982784 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:37.982916 kubelet[2688]: E0909 04:02:37.982803 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:38.453405 containerd[1511]: time="2025-09-09T04:02:38.453248300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:02:38.455511 containerd[1511]: time="2025-09-09T04:02:38.455458157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 04:02:38.456904 containerd[1511]: time="2025-09-09T04:02:38.456866129Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:02:38.460572 containerd[1511]: time="2025-09-09T04:02:38.460514452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:02:38.462893 containerd[1511]: time="2025-09-09T04:02:38.462826660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.796320782s" Sep 9 04:02:38.462893 containerd[1511]: time="2025-09-09T04:02:38.462877492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 04:02:38.468534 containerd[1511]: time="2025-09-09T04:02:38.468470297Z" level=info msg="CreateContainer within sandbox \"6f904bfca5da62703e3fc9c0502b53fe0fa0dda3bcda7406d31ae1499c84abbf\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 04:02:38.536520 containerd[1511]: time="2025-09-09T04:02:38.535773058Z" level=info msg="CreateContainer within sandbox \"6f904bfca5da62703e3fc9c0502b53fe0fa0dda3bcda7406d31ae1499c84abbf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"54147b8d76cdf08bd2b7b96060eb95f5bb2e4680b16ff5109676a56ad8a86e28\"" Sep 9 04:02:38.537124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679569338.mount: Deactivated successfully. Sep 9 04:02:38.540947 containerd[1511]: time="2025-09-09T04:02:38.540582139Z" level=info msg="StartContainer for \"54147b8d76cdf08bd2b7b96060eb95f5bb2e4680b16ff5109676a56ad8a86e28\"" Sep 9 04:02:38.592258 kubelet[2688]: E0909 04:02:38.592184 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148" Sep 9 04:02:38.734704 systemd[1]: Started cri-containerd-54147b8d76cdf08bd2b7b96060eb95f5bb2e4680b16ff5109676a56ad8a86e28.scope - libcontainer container 54147b8d76cdf08bd2b7b96060eb95f5bb2e4680b16ff5109676a56ad8a86e28. Sep 9 04:02:38.786524 kubelet[2688]: E0909 04:02:38.786308 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.786524 kubelet[2688]: W0909 04:02:38.786476 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.787020 kubelet[2688]: E0909 04:02:38.786787 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:38.787507 kubelet[2688]: E0909 04:02:38.787320 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.787507 kubelet[2688]: W0909 04:02:38.787340 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.787507 kubelet[2688]: E0909 04:02:38.787357 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:38.789293 kubelet[2688]: E0909 04:02:38.788220 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.789293 kubelet[2688]: W0909 04:02:38.788240 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.789293 kubelet[2688]: E0909 04:02:38.788270 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:38.789561 kubelet[2688]: E0909 04:02:38.789488 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.789561 kubelet[2688]: W0909 04:02:38.789517 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.789561 kubelet[2688]: E0909 04:02:38.789537 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:38.789997 kubelet[2688]: E0909 04:02:38.789968 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.790084 kubelet[2688]: W0909 04:02:38.789990 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.790084 kubelet[2688]: E0909 04:02:38.790070 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:38.790395 kubelet[2688]: E0909 04:02:38.790354 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.790543 kubelet[2688]: W0909 04:02:38.790409 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.790543 kubelet[2688]: E0909 04:02:38.790430 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:38.790760 kubelet[2688]: E0909 04:02:38.790702 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.790760 kubelet[2688]: W0909 04:02:38.790717 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.790760 kubelet[2688]: E0909 04:02:38.790732 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:38.791540 kubelet[2688]: E0909 04:02:38.790995 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.791540 kubelet[2688]: W0909 04:02:38.791010 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.791540 kubelet[2688]: E0909 04:02:38.791026 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:38.791540 kubelet[2688]: E0909 04:02:38.791334 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.791540 kubelet[2688]: W0909 04:02:38.791348 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.791540 kubelet[2688]: E0909 04:02:38.791405 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:38.792384 kubelet[2688]: E0909 04:02:38.791713 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.792384 kubelet[2688]: W0909 04:02:38.791729 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.792384 kubelet[2688]: E0909 04:02:38.791744 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:38.792384 kubelet[2688]: E0909 04:02:38.792027 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.792384 kubelet[2688]: W0909 04:02:38.792067 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.792384 kubelet[2688]: E0909 04:02:38.792088 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:38.792384 kubelet[2688]: E0909 04:02:38.792345 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.794661 kubelet[2688]: W0909 04:02:38.792404 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.794661 kubelet[2688]: E0909 04:02:38.792425 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:38.794661 kubelet[2688]: E0909 04:02:38.792721 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.794661 kubelet[2688]: W0909 04:02:38.792737 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.794661 kubelet[2688]: E0909 04:02:38.792752 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:02:38.794661 kubelet[2688]: E0909 04:02:38.793010 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.794661 kubelet[2688]: W0909 04:02:38.793025 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.794661 kubelet[2688]: E0909 04:02:38.793041 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:38.794661 kubelet[2688]: E0909 04:02:38.793302 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:02:38.794661 kubelet[2688]: W0909 04:02:38.793316 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:02:38.795128 kubelet[2688]: E0909 04:02:38.793345 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:02:38.810234 containerd[1511]: time="2025-09-09T04:02:38.810174274Z" level=info msg="StartContainer for \"54147b8d76cdf08bd2b7b96060eb95f5bb2e4680b16ff5109676a56ad8a86e28\" returns successfully" Sep 9 04:02:38.830853 systemd[1]: cri-containerd-54147b8d76cdf08bd2b7b96060eb95f5bb2e4680b16ff5109676a56ad8a86e28.scope: Deactivated successfully. Sep 9 04:02:38.878454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54147b8d76cdf08bd2b7b96060eb95f5bb2e4680b16ff5109676a56ad8a86e28-rootfs.mount: Deactivated successfully. 
Sep 9 04:02:38.941613 containerd[1511]: time="2025-09-09T04:02:38.893266456Z" level=info msg="shim disconnected" id=54147b8d76cdf08bd2b7b96060eb95f5bb2e4680b16ff5109676a56ad8a86e28 namespace=k8s.io
Sep 9 04:02:38.941613 containerd[1511]: time="2025-09-09T04:02:38.941540961Z" level=warning msg="cleaning up after shim disconnected" id=54147b8d76cdf08bd2b7b96060eb95f5bb2e4680b16ff5109676a56ad8a86e28 namespace=k8s.io
Sep 9 04:02:38.942136 containerd[1511]: time="2025-09-09T04:02:38.941573101Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 04:02:39.793631 containerd[1511]: time="2025-09-09T04:02:39.793066605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 9 04:02:40.592974 kubelet[2688]: E0909 04:02:40.592890 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148"
Sep 9 04:02:42.594837 kubelet[2688]: E0909 04:02:42.594674 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148"
Sep 9 04:02:44.593650 kubelet[2688]: E0909 04:02:44.592846 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148"
Sep 9 04:02:46.486894 containerd[1511]: time="2025-09-09T04:02:46.486751067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:02:46.490627 containerd[1511]: time="2025-09-09T04:02:46.490540930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 9 04:02:46.492107 containerd[1511]: time="2025-09-09T04:02:46.492035714Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:02:46.496400 containerd[1511]: time="2025-09-09T04:02:46.495677177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:02:46.497260 containerd[1511]: time="2025-09-09T04:02:46.497212257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 6.704037428s"
Sep 9 04:02:46.497445 containerd[1511]: time="2025-09-09T04:02:46.497414643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 9 04:02:46.502099 containerd[1511]: time="2025-09-09T04:02:46.502035481Z" level=info msg="CreateContainer within sandbox \"6f904bfca5da62703e3fc9c0502b53fe0fa0dda3bcda7406d31ae1499c84abbf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 9 04:02:46.538898 containerd[1511]: time="2025-09-09T04:02:46.538673514Z" level=info msg="CreateContainer within sandbox \"6f904bfca5da62703e3fc9c0502b53fe0fa0dda3bcda7406d31ae1499c84abbf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"414b820206ef6cdfc0e19a7cf9bfc0038e1f2e401c1237d3c22afd966b1266a6\""
Sep 9 04:02:46.540207 containerd[1511]: time="2025-09-09T04:02:46.539729731Z" level=info msg="StartContainer for \"414b820206ef6cdfc0e19a7cf9bfc0038e1f2e401c1237d3c22afd966b1266a6\""
Sep 9 04:02:46.592283 kubelet[2688]: E0909 04:02:46.591749 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148"
Sep 9 04:02:46.613896 systemd[1]: run-containerd-runc-k8s.io-414b820206ef6cdfc0e19a7cf9bfc0038e1f2e401c1237d3c22afd966b1266a6-runc.R7Fvjr.mount: Deactivated successfully.
Sep 9 04:02:46.628635 systemd[1]: Started cri-containerd-414b820206ef6cdfc0e19a7cf9bfc0038e1f2e401c1237d3c22afd966b1266a6.scope - libcontainer container 414b820206ef6cdfc0e19a7cf9bfc0038e1f2e401c1237d3c22afd966b1266a6.
Sep 9 04:02:46.696954 containerd[1511]: time="2025-09-09T04:02:46.696891381Z" level=info msg="StartContainer for \"414b820206ef6cdfc0e19a7cf9bfc0038e1f2e401c1237d3c22afd966b1266a6\" returns successfully"
Sep 9 04:02:48.191608 systemd[1]: cri-containerd-414b820206ef6cdfc0e19a7cf9bfc0038e1f2e401c1237d3c22afd966b1266a6.scope: Deactivated successfully.
Sep 9 04:02:48.247143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-414b820206ef6cdfc0e19a7cf9bfc0038e1f2e401c1237d3c22afd966b1266a6-rootfs.mount: Deactivated successfully.
Sep 9 04:02:48.252914 containerd[1511]: time="2025-09-09T04:02:48.250487960Z" level=info msg="shim disconnected" id=414b820206ef6cdfc0e19a7cf9bfc0038e1f2e401c1237d3c22afd966b1266a6 namespace=k8s.io
Sep 9 04:02:48.252914 containerd[1511]: time="2025-09-09T04:02:48.250663352Z" level=warning msg="cleaning up after shim disconnected" id=414b820206ef6cdfc0e19a7cf9bfc0038e1f2e401c1237d3c22afd966b1266a6 namespace=k8s.io
Sep 9 04:02:48.252914 containerd[1511]: time="2025-09-09T04:02:48.250693571Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 04:02:48.268665 kubelet[2688]: I0909 04:02:48.268612 2688 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 9 04:02:48.358145 systemd[1]: Created slice kubepods-burstable-pod77d1e297_6db0_4528_90d8_7bdccecd3fb8.slice - libcontainer container kubepods-burstable-pod77d1e297_6db0_4528_90d8_7bdccecd3fb8.slice.
Sep 9 04:02:48.378429 systemd[1]: Created slice kubepods-burstable-pod485695d1_af74_4c84_bc1e_c3693d7e6d5c.slice - libcontainer container kubepods-burstable-pod485695d1_af74_4c84_bc1e_c3693d7e6d5c.slice.
Sep 9 04:02:48.396606 systemd[1]: Created slice kubepods-besteffort-pod4164c1d5_1085_4008_9d19_95f326c5d9e7.slice - libcontainer container kubepods-besteffort-pod4164c1d5_1085_4008_9d19_95f326c5d9e7.slice.
Sep 9 04:02:48.416567 systemd[1]: Created slice kubepods-besteffort-pod3fe8fd08_d368_4dc2_854d_3e82426c7226.slice - libcontainer container kubepods-besteffort-pod3fe8fd08_d368_4dc2_854d_3e82426c7226.slice.
Sep 9 04:02:48.449433 systemd[1]: Created slice kubepods-besteffort-pod60ea252b_bb65_4eeb_baac_a9493773063e.slice - libcontainer container kubepods-besteffort-pod60ea252b_bb65_4eeb_baac_a9493773063e.slice.
Sep 9 04:02:48.468394 kubelet[2688]: I0909 04:02:48.467571 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvmkx\" (UniqueName: \"kubernetes.io/projected/60ea252b-bb65-4eeb-baac-a9493773063e-kube-api-access-hvmkx\") pod \"goldmane-7988f88666-gx7p8\" (UID: \"60ea252b-bb65-4eeb-baac-a9493773063e\") " pod="calico-system/goldmane-7988f88666-gx7p8"
Sep 9 04:02:48.470279 kubelet[2688]: I0909 04:02:48.468680 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/60ea252b-bb65-4eeb-baac-a9493773063e-goldmane-key-pair\") pod \"goldmane-7988f88666-gx7p8\" (UID: \"60ea252b-bb65-4eeb-baac-a9493773063e\") " pod="calico-system/goldmane-7988f88666-gx7p8"
Sep 9 04:02:48.470279 kubelet[2688]: I0909 04:02:48.468952 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4164c1d5-1085-4008-9d19-95f326c5d9e7-calico-apiserver-certs\") pod \"calico-apiserver-7d865dc46-shtjq\" (UID: \"4164c1d5-1085-4008-9d19-95f326c5d9e7\") " pod="calico-apiserver/calico-apiserver-7d865dc46-shtjq"
Sep 9 04:02:48.469773 systemd[1]: Created slice kubepods-besteffort-pode9173aa2_a083_4753_8a52_dc5c6feaca7e.slice - libcontainer container kubepods-besteffort-pode9173aa2_a083_4753_8a52_dc5c6feaca7e.slice.
Sep 9 04:02:48.471490 kubelet[2688]: I0909 04:02:48.470756 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e1ceb73-abd9-444a-9955-f6d015b27503-calico-apiserver-certs\") pod \"calico-apiserver-7d865dc46-rrs7b\" (UID: \"2e1ceb73-abd9-444a-9955-f6d015b27503\") " pod="calico-apiserver/calico-apiserver-7d865dc46-rrs7b"
Sep 9 04:02:48.471490 kubelet[2688]: I0909 04:02:48.470920 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp2j6\" (UniqueName: \"kubernetes.io/projected/2e1ceb73-abd9-444a-9955-f6d015b27503-kube-api-access-pp2j6\") pod \"calico-apiserver-7d865dc46-rrs7b\" (UID: \"2e1ceb73-abd9-444a-9955-f6d015b27503\") " pod="calico-apiserver/calico-apiserver-7d865dc46-rrs7b"
Sep 9 04:02:48.471490 kubelet[2688]: I0909 04:02:48.470976 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3fe8fd08-d368-4dc2-854d-3e82426c7226-whisker-backend-key-pair\") pod \"whisker-6cf9bd4f5f-khjkv\" (UID: \"3fe8fd08-d368-4dc2-854d-3e82426c7226\") " pod="calico-system/whisker-6cf9bd4f5f-khjkv"
Sep 9 04:02:48.471490 kubelet[2688]: I0909 04:02:48.471031 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60ea252b-bb65-4eeb-baac-a9493773063e-goldmane-ca-bundle\") pod \"goldmane-7988f88666-gx7p8\" (UID: \"60ea252b-bb65-4eeb-baac-a9493773063e\") " pod="calico-system/goldmane-7988f88666-gx7p8"
Sep 9 04:02:48.471490 kubelet[2688]: I0909 04:02:48.471100 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcf59\" (UniqueName: \"kubernetes.io/projected/77d1e297-6db0-4528-90d8-7bdccecd3fb8-kube-api-access-mcf59\") pod \"coredns-7c65d6cfc9-tdpm2\" (UID: \"77d1e297-6db0-4528-90d8-7bdccecd3fb8\") " pod="kube-system/coredns-7c65d6cfc9-tdpm2"
Sep 9 04:02:48.472570 kubelet[2688]: I0909 04:02:48.471197 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fe8fd08-d368-4dc2-854d-3e82426c7226-whisker-ca-bundle\") pod \"whisker-6cf9bd4f5f-khjkv\" (UID: \"3fe8fd08-d368-4dc2-854d-3e82426c7226\") " pod="calico-system/whisker-6cf9bd4f5f-khjkv"
Sep 9 04:02:48.472570 kubelet[2688]: I0909 04:02:48.472442 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xr5p\" (UniqueName: \"kubernetes.io/projected/485695d1-af74-4c84-bc1e-c3693d7e6d5c-kube-api-access-6xr5p\") pod \"coredns-7c65d6cfc9-6m2tz\" (UID: \"485695d1-af74-4c84-bc1e-c3693d7e6d5c\") " pod="kube-system/coredns-7c65d6cfc9-6m2tz"
Sep 9 04:02:48.472570 kubelet[2688]: I0909 04:02:48.472500 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cktlt\" (UniqueName: \"kubernetes.io/projected/4164c1d5-1085-4008-9d19-95f326c5d9e7-kube-api-access-cktlt\") pod \"calico-apiserver-7d865dc46-shtjq\" (UID: \"4164c1d5-1085-4008-9d19-95f326c5d9e7\") " pod="calico-apiserver/calico-apiserver-7d865dc46-shtjq"
Sep 9 04:02:48.474382 kubelet[2688]: I0909 04:02:48.472532 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtj5d\" (UniqueName: \"kubernetes.io/projected/e9173aa2-a083-4753-8a52-dc5c6feaca7e-kube-api-access-rtj5d\") pod \"calico-kube-controllers-6c8cd869cf-qw87t\" (UID: \"e9173aa2-a083-4753-8a52-dc5c6feaca7e\") " pod="calico-system/calico-kube-controllers-6c8cd869cf-qw87t"
Sep 9 04:02:48.474382 kubelet[2688]: I0909 04:02:48.473199 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frp7n\" (UniqueName: \"kubernetes.io/projected/3fe8fd08-d368-4dc2-854d-3e82426c7226-kube-api-access-frp7n\") pod \"whisker-6cf9bd4f5f-khjkv\" (UID: \"3fe8fd08-d368-4dc2-854d-3e82426c7226\") " pod="calico-system/whisker-6cf9bd4f5f-khjkv"
Sep 9 04:02:48.474754 kubelet[2688]: I0909 04:02:48.474621 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9173aa2-a083-4753-8a52-dc5c6feaca7e-tigera-ca-bundle\") pod \"calico-kube-controllers-6c8cd869cf-qw87t\" (UID: \"e9173aa2-a083-4753-8a52-dc5c6feaca7e\") " pod="calico-system/calico-kube-controllers-6c8cd869cf-qw87t"
Sep 9 04:02:48.474754 kubelet[2688]: I0909 04:02:48.474703 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ea252b-bb65-4eeb-baac-a9493773063e-config\") pod \"goldmane-7988f88666-gx7p8\" (UID: \"60ea252b-bb65-4eeb-baac-a9493773063e\") " pod="calico-system/goldmane-7988f88666-gx7p8"
Sep 9 04:02:48.475084 kubelet[2688]: I0909 04:02:48.474938 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/485695d1-af74-4c84-bc1e-c3693d7e6d5c-config-volume\") pod \"coredns-7c65d6cfc9-6m2tz\" (UID: \"485695d1-af74-4c84-bc1e-c3693d7e6d5c\") " pod="kube-system/coredns-7c65d6cfc9-6m2tz"
Sep 9 04:02:48.475084 kubelet[2688]: I0909 04:02:48.475001 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77d1e297-6db0-4528-90d8-7bdccecd3fb8-config-volume\") pod \"coredns-7c65d6cfc9-tdpm2\" (UID: \"77d1e297-6db0-4528-90d8-7bdccecd3fb8\") " pod="kube-system/coredns-7c65d6cfc9-tdpm2"
Sep 9 04:02:48.484154 systemd[1]: Created slice kubepods-besteffort-pod2e1ceb73_abd9_444a_9955_f6d015b27503.slice - libcontainer container kubepods-besteffort-pod2e1ceb73_abd9_444a_9955_f6d015b27503.slice.
Sep 9 04:02:48.666202 systemd[1]: Created slice kubepods-besteffort-pod4f3923f8_1ebf_4579_9a05_a6111fc5a148.slice - libcontainer container kubepods-besteffort-pod4f3923f8_1ebf_4579_9a05_a6111fc5a148.slice.
Sep 9 04:02:48.682799 containerd[1511]: time="2025-09-09T04:02:48.682705405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tdpm2,Uid:77d1e297-6db0-4528-90d8-7bdccecd3fb8,Namespace:kube-system,Attempt:0,}"
Sep 9 04:02:48.684627 containerd[1511]: time="2025-09-09T04:02:48.684248823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79jrx,Uid:4f3923f8-1ebf-4579-9a05-a6111fc5a148,Namespace:calico-system,Attempt:0,}"
Sep 9 04:02:48.692654 containerd[1511]: time="2025-09-09T04:02:48.692603873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6m2tz,Uid:485695d1-af74-4c84-bc1e-c3693d7e6d5c,Namespace:kube-system,Attempt:0,}"
Sep 9 04:02:48.705589 containerd[1511]: time="2025-09-09T04:02:48.705442819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d865dc46-shtjq,Uid:4164c1d5-1085-4008-9d19-95f326c5d9e7,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 04:02:48.739472 containerd[1511]: time="2025-09-09T04:02:48.739244620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf9bd4f5f-khjkv,Uid:3fe8fd08-d368-4dc2-854d-3e82426c7226,Namespace:calico-system,Attempt:0,}"
Sep 9 04:02:48.777724 containerd[1511]: time="2025-09-09T04:02:48.777015646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gx7p8,Uid:60ea252b-bb65-4eeb-baac-a9493773063e,Namespace:calico-system,Attempt:0,}"
Sep 9 04:02:48.783259 containerd[1511]: time="2025-09-09T04:02:48.783200987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8cd869cf-qw87t,Uid:e9173aa2-a083-4753-8a52-dc5c6feaca7e,Namespace:calico-system,Attempt:0,}"
Sep 9 04:02:48.817810 containerd[1511]: time="2025-09-09T04:02:48.817726628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d865dc46-rrs7b,Uid:2e1ceb73-abd9-444a-9955-f6d015b27503,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 04:02:48.877270 containerd[1511]: time="2025-09-09T04:02:48.876760599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 9 04:02:49.311962 containerd[1511]: time="2025-09-09T04:02:49.311880443Z" level=error msg="Failed to destroy network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.317248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6-shm.mount: Deactivated successfully.
Sep 9 04:02:49.332480 containerd[1511]: time="2025-09-09T04:02:49.332348679Z" level=error msg="encountered an error cleaning up failed sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.333040 containerd[1511]: time="2025-09-09T04:02:49.332990776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79jrx,Uid:4f3923f8-1ebf-4579-9a05-a6111fc5a148,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.336436 kubelet[2688]: E0909 04:02:49.335746 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.336436 kubelet[2688]: E0909 04:02:49.336061 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79jrx"
Sep 9 04:02:49.336436 kubelet[2688]: E0909 04:02:49.336143 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79jrx"
Sep 9 04:02:49.341300 kubelet[2688]: E0909 04:02:49.339865 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-79jrx_calico-system(4f3923f8-1ebf-4579-9a05-a6111fc5a148)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-79jrx_calico-system(4f3923f8-1ebf-4579-9a05-a6111fc5a148)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148"
Sep 9 04:02:49.342019 containerd[1511]: time="2025-09-09T04:02:49.341948644Z" level=error msg="Failed to destroy network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.345392 containerd[1511]: time="2025-09-09T04:02:49.342984102Z" level=error msg="encountered an error cleaning up failed sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.345640 containerd[1511]: time="2025-09-09T04:02:49.345591626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6m2tz,Uid:485695d1-af74-4c84-bc1e-c3693d7e6d5c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.346070 kubelet[2688]: E0909 04:02:49.346017 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.346187 kubelet[2688]: E0909 04:02:49.346106 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6m2tz"
Sep 9 04:02:49.346187 kubelet[2688]: E0909 04:02:49.346140 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6m2tz"
Sep 9 04:02:49.346314 kubelet[2688]: E0909 04:02:49.346238 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6m2tz_kube-system(485695d1-af74-4c84-bc1e-c3693d7e6d5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6m2tz_kube-system(485695d1-af74-4c84-bc1e-c3693d7e6d5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6m2tz" podUID="485695d1-af74-4c84-bc1e-c3693d7e6d5c"
Sep 9 04:02:49.347337 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d-shm.mount: Deactivated successfully.
Sep 9 04:02:49.364846 containerd[1511]: time="2025-09-09T04:02:49.364594776Z" level=error msg="Failed to destroy network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.366432 containerd[1511]: time="2025-09-09T04:02:49.365799449Z" level=error msg="encountered an error cleaning up failed sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.367075 containerd[1511]: time="2025-09-09T04:02:49.366632939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d865dc46-shtjq,Uid:4164c1d5-1085-4008-9d19-95f326c5d9e7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.368428 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821-shm.mount: Deactivated successfully.
Sep 9 04:02:49.376819 kubelet[2688]: E0909 04:02:49.376505 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.376819 kubelet[2688]: E0909 04:02:49.376619 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d865dc46-shtjq"
Sep 9 04:02:49.376819 kubelet[2688]: E0909 04:02:49.376656 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d865dc46-shtjq"
Sep 9 04:02:49.378506 kubelet[2688]: E0909 04:02:49.378346 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d865dc46-shtjq_calico-apiserver(4164c1d5-1085-4008-9d19-95f326c5d9e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d865dc46-shtjq_calico-apiserver(4164c1d5-1085-4008-9d19-95f326c5d9e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d865dc46-shtjq" podUID="4164c1d5-1085-4008-9d19-95f326c5d9e7"
Sep 9 04:02:49.430344 containerd[1511]: time="2025-09-09T04:02:49.429767077Z" level=error msg="Failed to destroy network for sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.434961 containerd[1511]: time="2025-09-09T04:02:49.434902991Z" level=error msg="encountered an error cleaning up failed sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.435104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b-shm.mount: Deactivated successfully.
Sep 9 04:02:49.436772 containerd[1511]: time="2025-09-09T04:02:49.436719710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tdpm2,Uid:77d1e297-6db0-4528-90d8-7bdccecd3fb8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.437126 containerd[1511]: time="2025-09-09T04:02:49.437089492Z" level=error msg="Failed to destroy network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.438563 containerd[1511]: time="2025-09-09T04:02:49.438525358Z" level=error msg="encountered an error cleaning up failed sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.439515 containerd[1511]: time="2025-09-09T04:02:49.439475430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf9bd4f5f-khjkv,Uid:3fe8fd08-d368-4dc2-854d-3e82426c7226,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.440103 kubelet[2688]: E0909 04:02:49.440041 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.440401 kubelet[2688]: E0909 04:02:49.440272 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.440571 kubelet[2688]: E0909 04:02:49.440539 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tdpm2"
Sep 9 04:02:49.440798 kubelet[2688]: E0909 04:02:49.440753 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tdpm2"
Sep 9 04:02:49.441058 kubelet[2688]: E0909 04:02:49.441000 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tdpm2_kube-system(77d1e297-6db0-4528-90d8-7bdccecd3fb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tdpm2_kube-system(77d1e297-6db0-4528-90d8-7bdccecd3fb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tdpm2" podUID="77d1e297-6db0-4528-90d8-7bdccecd3fb8"
Sep 9 04:02:49.441560 containerd[1511]: time="2025-09-09T04:02:49.441515076Z" level=error msg="Failed to destroy network for sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.441822 kubelet[2688]: E0909 04:02:49.440688 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cf9bd4f5f-khjkv"
Sep 9 04:02:49.441985 kubelet[2688]: E0909 04:02:49.441939 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cf9bd4f5f-khjkv"
Sep 9 04:02:49.443717 kubelet[2688]: E0909 04:02:49.443286 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cf9bd4f5f-khjkv_calico-system(3fe8fd08-d368-4dc2-854d-3e82426c7226)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cf9bd4f5f-khjkv_calico-system(3fe8fd08-d368-4dc2-854d-3e82426c7226)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cf9bd4f5f-khjkv" podUID="3fe8fd08-d368-4dc2-854d-3e82426c7226"
Sep 9 04:02:49.445540 containerd[1511]: time="2025-09-09T04:02:49.445300565Z" level=error msg="encountered an error cleaning up failed sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.446426 containerd[1511]: time="2025-09-09T04:02:49.445466972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8cd869cf-qw87t,Uid:e9173aa2-a083-4753-8a52-dc5c6feaca7e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:02:49.448991 kubelet[2688]: E0909 04:02:49.446802 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown
desc = failed to setup network for sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:49.448991 kubelet[2688]: E0909 04:02:49.446870 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c8cd869cf-qw87t" Sep 9 04:02:49.448991 kubelet[2688]: E0909 04:02:49.446918 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c8cd869cf-qw87t" Sep 9 04:02:49.450626 kubelet[2688]: E0909 04:02:49.446986 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c8cd869cf-qw87t_calico-system(e9173aa2-a083-4753-8a52-dc5c6feaca7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c8cd869cf-qw87t_calico-system(e9173aa2-a083-4753-8a52-dc5c6feaca7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c8cd869cf-qw87t" podUID="e9173aa2-a083-4753-8a52-dc5c6feaca7e" Sep 9 04:02:49.468580 containerd[1511]: time="2025-09-09T04:02:49.468509743Z" level=error msg="Failed to destroy network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:49.469354 containerd[1511]: time="2025-09-09T04:02:49.469312671Z" level=error msg="encountered an error cleaning up failed sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:49.469620 containerd[1511]: time="2025-09-09T04:02:49.469561086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d865dc46-rrs7b,Uid:2e1ceb73-abd9-444a-9955-f6d015b27503,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:49.471037 kubelet[2688]: E0909 04:02:49.470283 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 
04:02:49.471037 kubelet[2688]: E0909 04:02:49.470389 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d865dc46-rrs7b" Sep 9 04:02:49.471037 kubelet[2688]: E0909 04:02:49.470421 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d865dc46-rrs7b" Sep 9 04:02:49.471295 kubelet[2688]: E0909 04:02:49.470489 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d865dc46-rrs7b_calico-apiserver(2e1ceb73-abd9-444a-9955-f6d015b27503)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d865dc46-rrs7b_calico-apiserver(2e1ceb73-abd9-444a-9955-f6d015b27503)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d865dc46-rrs7b" podUID="2e1ceb73-abd9-444a-9955-f6d015b27503" Sep 9 04:02:49.473046 containerd[1511]: time="2025-09-09T04:02:49.472846839Z" level=error msg="Failed to destroy network for sandbox 
\"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:49.474515 containerd[1511]: time="2025-09-09T04:02:49.473955589Z" level=error msg="encountered an error cleaning up failed sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:49.474847 containerd[1511]: time="2025-09-09T04:02:49.474722375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gx7p8,Uid:60ea252b-bb65-4eeb-baac-a9493773063e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:49.475467 kubelet[2688]: E0909 04:02:49.475390 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:49.475592 kubelet[2688]: E0909 04:02:49.475492 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gx7p8" Sep 9 04:02:49.475592 kubelet[2688]: E0909 04:02:49.475524 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gx7p8" Sep 9 04:02:49.475679 kubelet[2688]: E0909 04:02:49.475585 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-gx7p8_calico-system(60ea252b-bb65-4eeb-baac-a9493773063e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-gx7p8_calico-system(60ea252b-bb65-4eeb-baac-a9493773063e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gx7p8" podUID="60ea252b-bb65-4eeb-baac-a9493773063e" Sep 9 04:02:49.864933 kubelet[2688]: I0909 04:02:49.864875 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Sep 9 04:02:49.869322 kubelet[2688]: I0909 04:02:49.868349 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:02:49.878413 containerd[1511]: time="2025-09-09T04:02:49.876402236Z" level=info 
msg="StopPodSandbox for \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\"" Sep 9 04:02:49.878413 containerd[1511]: time="2025-09-09T04:02:49.877593483Z" level=info msg="StopPodSandbox for \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\"" Sep 9 04:02:49.879033 containerd[1511]: time="2025-09-09T04:02:49.878977665Z" level=info msg="Ensure that sandbox d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821 in task-service has been cleanup successfully" Sep 9 04:02:49.881466 kubelet[2688]: I0909 04:02:49.880909 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Sep 9 04:02:49.881552 containerd[1511]: time="2025-09-09T04:02:49.879009204Z" level=info msg="Ensure that sandbox f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d in task-service has been cleanup successfully" Sep 9 04:02:49.884123 containerd[1511]: time="2025-09-09T04:02:49.883888338Z" level=info msg="StopPodSandbox for \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\"" Sep 9 04:02:49.888675 kubelet[2688]: I0909 04:02:49.888125 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Sep 9 04:02:49.890590 containerd[1511]: time="2025-09-09T04:02:49.889589080Z" level=info msg="StopPodSandbox for \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\"" Sep 9 04:02:49.890590 containerd[1511]: time="2025-09-09T04:02:49.889892023Z" level=info msg="Ensure that sandbox c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a in task-service has been cleanup successfully" Sep 9 04:02:49.895063 kubelet[2688]: I0909 04:02:49.895023 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Sep 9 04:02:49.895871 
containerd[1511]: time="2025-09-09T04:02:49.895829048Z" level=info msg="StopPodSandbox for \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\"" Sep 9 04:02:49.896225 containerd[1511]: time="2025-09-09T04:02:49.896190509Z" level=info msg="Ensure that sandbox 73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7 in task-service has been cleanup successfully" Sep 9 04:02:49.898881 containerd[1511]: time="2025-09-09T04:02:49.898104625Z" level=info msg="Ensure that sandbox e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b in task-service has been cleanup successfully" Sep 9 04:02:49.907005 kubelet[2688]: I0909 04:02:49.906967 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:02:49.909207 containerd[1511]: time="2025-09-09T04:02:49.908434674Z" level=info msg="StopPodSandbox for \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\"" Sep 9 04:02:49.909207 containerd[1511]: time="2025-09-09T04:02:49.908730454Z" level=info msg="Ensure that sandbox 031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d in task-service has been cleanup successfully" Sep 9 04:02:49.916830 kubelet[2688]: I0909 04:02:49.916792 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:02:49.920786 containerd[1511]: time="2025-09-09T04:02:49.920575161Z" level=info msg="StopPodSandbox for \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\"" Sep 9 04:02:49.924055 containerd[1511]: time="2025-09-09T04:02:49.923880193Z" level=info msg="Ensure that sandbox e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51 in task-service has been cleanup successfully" Sep 9 04:02:49.940007 kubelet[2688]: I0909 04:02:49.939930 2688 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Sep 9 04:02:49.945221 containerd[1511]: time="2025-09-09T04:02:49.944464818Z" level=info msg="StopPodSandbox for \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\"" Sep 9 04:02:49.960648 containerd[1511]: time="2025-09-09T04:02:49.960583820Z" level=info msg="Ensure that sandbox 6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6 in task-service has been cleanup successfully" Sep 9 04:02:50.094300 containerd[1511]: time="2025-09-09T04:02:50.089956795Z" level=error msg="StopPodSandbox for \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\" failed" error="failed to destroy network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:50.097097 kubelet[2688]: E0909 04:02:50.090412 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Sep 9 04:02:50.097097 kubelet[2688]: E0909 04:02:50.090512 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d"} Sep 9 04:02:50.097097 kubelet[2688]: E0909 04:02:50.090646 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60ea252b-bb65-4eeb-baac-a9493773063e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:02:50.097097 kubelet[2688]: E0909 04:02:50.090692 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60ea252b-bb65-4eeb-baac-a9493773063e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gx7p8" podUID="60ea252b-bb65-4eeb-baac-a9493773063e" Sep 9 04:02:50.100700 containerd[1511]: time="2025-09-09T04:02:50.100646263Z" level=error msg="StopPodSandbox for \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\" failed" error="failed to destroy network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:50.101213 kubelet[2688]: E0909 04:02:50.101154 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:02:50.101877 kubelet[2688]: E0909 
04:02:50.101825 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d"} Sep 9 04:02:50.102205 kubelet[2688]: E0909 04:02:50.102053 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"485695d1-af74-4c84-bc1e-c3693d7e6d5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:02:50.102673 kubelet[2688]: E0909 04:02:50.102406 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"485695d1-af74-4c84-bc1e-c3693d7e6d5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6m2tz" podUID="485695d1-af74-4c84-bc1e-c3693d7e6d5c" Sep 9 04:02:50.212989 containerd[1511]: time="2025-09-09T04:02:50.210500989Z" level=error msg="StopPodSandbox for \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\" failed" error="failed to destroy network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:50.223510 containerd[1511]: time="2025-09-09T04:02:50.223447153Z" level=error msg="StopPodSandbox for 
\"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\" failed" error="failed to destroy network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:50.224333 kubelet[2688]: E0909 04:02:50.224042 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Sep 9 04:02:50.224333 kubelet[2688]: E0909 04:02:50.224157 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a"} Sep 9 04:02:50.224333 kubelet[2688]: E0909 04:02:50.224221 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e1ceb73-abd9-444a-9955-f6d015b27503\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:02:50.224333 kubelet[2688]: E0909 04:02:50.224265 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e1ceb73-abd9-444a-9955-f6d015b27503\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d865dc46-rrs7b" podUID="2e1ceb73-abd9-444a-9955-f6d015b27503" Sep 9 04:02:50.224836 kubelet[2688]: E0909 04:02:50.224145 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Sep 9 04:02:50.224836 kubelet[2688]: E0909 04:02:50.224327 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"} Sep 9 04:02:50.224836 kubelet[2688]: E0909 04:02:50.224535 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f3923f8-1ebf-4579-9a05-a6111fc5a148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:02:50.224836 kubelet[2688]: E0909 04:02:50.224573 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f3923f8-1ebf-4579-9a05-a6111fc5a148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148" Sep 9 04:02:50.232100 containerd[1511]: time="2025-09-09T04:02:50.231513999Z" level=error msg="StopPodSandbox for \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\" failed" error="failed to destroy network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:50.232425 kubelet[2688]: E0909 04:02:50.232059 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Sep 9 04:02:50.232425 kubelet[2688]: E0909 04:02:50.232109 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7"} Sep 9 04:02:50.232425 kubelet[2688]: E0909 04:02:50.232182 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3fe8fd08-d368-4dc2-854d-3e82426c7226\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:02:50.232425 kubelet[2688]: E0909 04:02:50.232218 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3fe8fd08-d368-4dc2-854d-3e82426c7226\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cf9bd4f5f-khjkv" podUID="3fe8fd08-d368-4dc2-854d-3e82426c7226" Sep 9 04:02:50.236763 containerd[1511]: time="2025-09-09T04:02:50.236240470Z" level=error msg="StopPodSandbox for \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\" failed" error="failed to destroy network for sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:50.236862 kubelet[2688]: E0909 04:02:50.236555 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:02:50.236862 kubelet[2688]: E0909 04:02:50.236617 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51"} Sep 9 04:02:50.236862 kubelet[2688]: E0909 04:02:50.236668 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9173aa2-a083-4753-8a52-dc5c6feaca7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:02:50.236862 kubelet[2688]: E0909 04:02:50.236705 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9173aa2-a083-4753-8a52-dc5c6feaca7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c8cd869cf-qw87t" podUID="e9173aa2-a083-4753-8a52-dc5c6feaca7e" Sep 9 04:02:50.249180 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51-shm.mount: Deactivated successfully. Sep 9 04:02:50.250659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7-shm.mount: Deactivated successfully. 
Sep 9 04:02:50.252597 kubelet[2688]: E0909 04:02:50.251316 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Sep 9 04:02:50.252711 containerd[1511]: time="2025-09-09T04:02:50.251011265Z" level=error msg="StopPodSandbox for \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\" failed" error="failed to destroy network for sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:50.251505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d-shm.mount: Deactivated successfully. 
Sep 9 04:02:50.254221 kubelet[2688]: E0909 04:02:50.253451 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"} Sep 9 04:02:50.254221 kubelet[2688]: E0909 04:02:50.253520 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77d1e297-6db0-4528-90d8-7bdccecd3fb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:02:50.254221 kubelet[2688]: E0909 04:02:50.253569 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77d1e297-6db0-4528-90d8-7bdccecd3fb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tdpm2" podUID="77d1e297-6db0-4528-90d8-7bdccecd3fb8" Sep 9 04:02:50.251660 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a-shm.mount: Deactivated successfully. 
Sep 9 04:02:50.255959 containerd[1511]: time="2025-09-09T04:02:50.255192495Z" level=error msg="StopPodSandbox for \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\" failed" error="failed to destroy network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:02:50.256036 kubelet[2688]: E0909 04:02:50.255498 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:02:50.256036 kubelet[2688]: E0909 04:02:50.255551 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821"} Sep 9 04:02:50.256036 kubelet[2688]: E0909 04:02:50.255613 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4164c1d5-1085-4008-9d19-95f326c5d9e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:02:50.256036 kubelet[2688]: E0909 04:02:50.255741 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4164c1d5-1085-4008-9d19-95f326c5d9e7\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d865dc46-shtjq" podUID="4164c1d5-1085-4008-9d19-95f326c5d9e7" Sep 9 04:03:00.606445 containerd[1511]: time="2025-09-09T04:03:00.605784068Z" level=info msg="StopPodSandbox for \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\"" Sep 9 04:03:00.755854 containerd[1511]: time="2025-09-09T04:03:00.755686331Z" level=error msg="StopPodSandbox for \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\" failed" error="failed to destroy network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:03:00.756966 kubelet[2688]: E0909 04:03:00.756317 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:03:00.756966 kubelet[2688]: E0909 04:03:00.756494 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d"} Sep 9 04:03:00.756966 kubelet[2688]: E0909 04:03:00.756583 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"485695d1-af74-4c84-bc1e-c3693d7e6d5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:03:00.756966 kubelet[2688]: E0909 04:03:00.756651 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"485695d1-af74-4c84-bc1e-c3693d7e6d5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6m2tz" podUID="485695d1-af74-4c84-bc1e-c3693d7e6d5c" Sep 9 04:03:01.599738 containerd[1511]: time="2025-09-09T04:03:01.598021021Z" level=info msg="StopPodSandbox for \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\"" Sep 9 04:03:01.604108 containerd[1511]: time="2025-09-09T04:03:01.603415806Z" level=info msg="StopPodSandbox for \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\"" Sep 9 04:03:01.605568 containerd[1511]: time="2025-09-09T04:03:01.605428898Z" level=info msg="StopPodSandbox for \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\"" Sep 9 04:03:01.608972 containerd[1511]: time="2025-09-09T04:03:01.608938062Z" level=info msg="StopPodSandbox for \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\"" Sep 9 04:03:01.800226 containerd[1511]: time="2025-09-09T04:03:01.799924752Z" level=error msg="StopPodSandbox for \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\" 
failed" error="failed to destroy network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:03:01.804357 kubelet[2688]: E0909 04:03:01.804099 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:03:01.804357 kubelet[2688]: E0909 04:03:01.804276 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821"} Sep 9 04:03:01.805231 kubelet[2688]: E0909 04:03:01.804412 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4164c1d5-1085-4008-9d19-95f326c5d9e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:03:01.805231 kubelet[2688]: E0909 04:03:01.804498 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4164c1d5-1085-4008-9d19-95f326c5d9e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d865dc46-shtjq" podUID="4164c1d5-1085-4008-9d19-95f326c5d9e7" Sep 9 04:03:01.807665 containerd[1511]: time="2025-09-09T04:03:01.807624596Z" level=error msg="StopPodSandbox for \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\" failed" error="failed to destroy network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:03:01.808244 kubelet[2688]: E0909 04:03:01.808020 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Sep 9 04:03:01.808244 kubelet[2688]: E0909 04:03:01.808085 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"} Sep 9 04:03:01.808244 kubelet[2688]: E0909 04:03:01.808131 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f3923f8-1ebf-4579-9a05-a6111fc5a148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:03:01.808244 kubelet[2688]: E0909 04:03:01.808195 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f3923f8-1ebf-4579-9a05-a6111fc5a148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79jrx" podUID="4f3923f8-1ebf-4579-9a05-a6111fc5a148" Sep 9 04:03:01.818357 containerd[1511]: time="2025-09-09T04:03:01.817722551Z" level=error msg="StopPodSandbox for \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\" failed" error="failed to destroy network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:03:01.818490 kubelet[2688]: E0909 04:03:01.818067 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Sep 9 04:03:01.818490 kubelet[2688]: E0909 04:03:01.818121 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7"} Sep 9 04:03:01.818490 kubelet[2688]: 
E0909 04:03:01.818162 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3fe8fd08-d368-4dc2-854d-3e82426c7226\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:03:01.818490 kubelet[2688]: E0909 04:03:01.818190 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3fe8fd08-d368-4dc2-854d-3e82426c7226\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cf9bd4f5f-khjkv" podUID="3fe8fd08-d368-4dc2-854d-3e82426c7226" Sep 9 04:03:01.823941 containerd[1511]: time="2025-09-09T04:03:01.822651902Z" level=error msg="StopPodSandbox for \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\" failed" error="failed to destroy network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:03:01.824072 kubelet[2688]: E0909 04:03:01.822837 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Sep 9 04:03:01.824072 kubelet[2688]: E0909 04:03:01.822915 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d"} Sep 9 04:03:01.824072 kubelet[2688]: E0909 04:03:01.822963 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60ea252b-bb65-4eeb-baac-a9493773063e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:03:01.824072 kubelet[2688]: E0909 04:03:01.823000 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60ea252b-bb65-4eeb-baac-a9493773063e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gx7p8" podUID="60ea252b-bb65-4eeb-baac-a9493773063e" Sep 9 04:03:02.595839 containerd[1511]: time="2025-09-09T04:03:02.594946680Z" level=info msg="StopPodSandbox for \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\"" Sep 9 04:03:02.714795 containerd[1511]: time="2025-09-09T04:03:02.714558930Z" level=error msg="StopPodSandbox for \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\" failed" 
error="failed to destroy network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:03:02.715771 kubelet[2688]: E0909 04:03:02.715533 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Sep 9 04:03:02.715771 kubelet[2688]: E0909 04:03:02.715621 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a"} Sep 9 04:03:02.715771 kubelet[2688]: E0909 04:03:02.715670 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e1ceb73-abd9-444a-9955-f6d015b27503\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 04:03:02.716083 kubelet[2688]: E0909 04:03:02.715712 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e1ceb73-abd9-444a-9955-f6d015b27503\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d865dc46-rrs7b" podUID="2e1ceb73-abd9-444a-9955-f6d015b27503" Sep 9 04:03:03.535860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556422262.mount: Deactivated successfully. Sep 9 04:03:03.646282 containerd[1511]: time="2025-09-09T04:03:03.646165086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 04:03:03.648530 containerd[1511]: time="2025-09-09T04:03:03.647383515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:03.679407 containerd[1511]: time="2025-09-09T04:03:03.678813833Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:03.681391 containerd[1511]: time="2025-09-09T04:03:03.680913753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:03.683503 containerd[1511]: time="2025-09-09T04:03:03.683450435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 14.803284445s" Sep 9 04:03:03.683615 containerd[1511]: time="2025-09-09T04:03:03.683520024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference 
\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 04:03:03.766680 containerd[1511]: time="2025-09-09T04:03:03.766252438Z" level=info msg="CreateContainer within sandbox \"6f904bfca5da62703e3fc9c0502b53fe0fa0dda3bcda7406d31ae1499c84abbf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 04:03:03.817449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519047670.mount: Deactivated successfully. Sep 9 04:03:03.824978 containerd[1511]: time="2025-09-09T04:03:03.824883664Z" level=info msg="CreateContainer within sandbox \"6f904bfca5da62703e3fc9c0502b53fe0fa0dda3bcda7406d31ae1499c84abbf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1762fc6436067a3010e536b7e41fd44a8da6af25eae8ded4f83712d5d4a8cfbf\"" Sep 9 04:03:03.829142 containerd[1511]: time="2025-09-09T04:03:03.829070893Z" level=info msg="StartContainer for \"1762fc6436067a3010e536b7e41fd44a8da6af25eae8ded4f83712d5d4a8cfbf\"" Sep 9 04:03:04.050114 systemd[1]: Started cri-containerd-1762fc6436067a3010e536b7e41fd44a8da6af25eae8ded4f83712d5d4a8cfbf.scope - libcontainer container 1762fc6436067a3010e536b7e41fd44a8da6af25eae8ded4f83712d5d4a8cfbf. Sep 9 04:03:04.133243 containerd[1511]: time="2025-09-09T04:03:04.133063851Z" level=info msg="StartContainer for \"1762fc6436067a3010e536b7e41fd44a8da6af25eae8ded4f83712d5d4a8cfbf\" returns successfully" Sep 9 04:03:04.315080 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 04:03:04.318445 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 04:03:04.597318 containerd[1511]: time="2025-09-09T04:03:04.597252224Z" level=info msg="StopPodSandbox for \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\"" Sep 9 04:03:04.627976 containerd[1511]: time="2025-09-09T04:03:04.627083233Z" level=info msg="StopPodSandbox for \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\"" Sep 9 04:03:05.130296 kubelet[2688]: I0909 04:03:05.122039 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h982s" podStartSLOduration=2.458232302 podStartE2EDuration="33.100511336s" podCreationTimestamp="2025-09-09 04:02:32 +0000 UTC" firstStartedPulling="2025-09-09 04:02:33.066671465 +0000 UTC m=+24.877312370" lastFinishedPulling="2025-09-09 04:03:03.70895049 +0000 UTC m=+55.519591404" observedRunningTime="2025-09-09 04:03:05.095400833 +0000 UTC m=+56.906041778" watchObservedRunningTime="2025-09-09 04:03:05.100511336 +0000 UTC m=+56.911152250" Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:04.835 [INFO][3991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:04.836 [INFO][3991] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" iface="eth0" netns="/var/run/netns/cni-71da3d50-4b37-7efc-1ec1-12b978acec56" Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:04.839 [INFO][3991] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" iface="eth0" netns="/var/run/netns/cni-71da3d50-4b37-7efc-1ec1-12b978acec56" Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:04.840 [INFO][3991] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" iface="eth0" netns="/var/run/netns/cni-71da3d50-4b37-7efc-1ec1-12b978acec56" Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:04.840 [INFO][3991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:04.840 [INFO][3991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:05.124 [INFO][4007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" HandleID="k8s-pod-network.e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:05.127 [INFO][4007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:05.127 [INFO][4007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:05.172 [WARNING][4007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" HandleID="k8s-pod-network.e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:05.172 [INFO][4007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" HandleID="k8s-pod-network.e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:05.177 [INFO][4007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:05.195385 containerd[1511]: 2025-09-09 04:03:05.185 [INFO][3991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:05.213166 containerd[1511]: time="2025-09-09T04:03:05.211726251Z" level=info msg="TearDown network for sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\" successfully" Sep 9 04:03:05.213166 containerd[1511]: time="2025-09-09T04:03:05.211804502Z" level=info msg="StopPodSandbox for \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\" returns successfully" Sep 9 04:03:05.215778 containerd[1511]: time="2025-09-09T04:03:05.215742384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8cd869cf-qw87t,Uid:e9173aa2-a083-4753-8a52-dc5c6feaca7e,Namespace:calico-system,Attempt:1,}" Sep 9 04:03:05.217472 systemd[1]: run-netns-cni\x2d71da3d50\x2d4b37\x2d7efc\x2d1ec1\x2d12b978acec56.mount: Deactivated successfully. 
Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:04.832 [INFO][3992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:04.835 [INFO][3992] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" iface="eth0" netns="/var/run/netns/cni-4501605b-1e43-1871-648f-dcbbc2e16b4b" Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:04.836 [INFO][3992] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" iface="eth0" netns="/var/run/netns/cni-4501605b-1e43-1871-648f-dcbbc2e16b4b" Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:04.839 [INFO][3992] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" iface="eth0" netns="/var/run/netns/cni-4501605b-1e43-1871-648f-dcbbc2e16b4b" Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:04.840 [INFO][3992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:04.840 [INFO][3992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:05.122 [INFO][4008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" HandleID="k8s-pod-network.73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0" Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:05.129 [INFO][4008] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:05.177 [INFO][4008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:05.194 [WARNING][4008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" HandleID="k8s-pod-network.73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0" Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:05.194 [INFO][4008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" HandleID="k8s-pod-network.73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0" Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:05.199 [INFO][4008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:05.232553 containerd[1511]: 2025-09-09 04:03:05.223 [INFO][3992] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Sep 9 04:03:05.235796 containerd[1511]: time="2025-09-09T04:03:05.233774021Z" level=info msg="TearDown network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\" successfully" Sep 9 04:03:05.235796 containerd[1511]: time="2025-09-09T04:03:05.233823157Z" level=info msg="StopPodSandbox for \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\" returns successfully" Sep 9 04:03:05.244325 systemd[1]: run-netns-cni\x2d4501605b\x2d1e43\x2d1871\x2d648f\x2ddcbbc2e16b4b.mount: Deactivated successfully. 
Sep 9 04:03:05.459821 kubelet[2688]: I0909 04:03:05.457833 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frp7n\" (UniqueName: \"kubernetes.io/projected/3fe8fd08-d368-4dc2-854d-3e82426c7226-kube-api-access-frp7n\") pod \"3fe8fd08-d368-4dc2-854d-3e82426c7226\" (UID: \"3fe8fd08-d368-4dc2-854d-3e82426c7226\") "
Sep 9 04:03:05.459821 kubelet[2688]: I0909 04:03:05.457929 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3fe8fd08-d368-4dc2-854d-3e82426c7226-whisker-backend-key-pair\") pod \"3fe8fd08-d368-4dc2-854d-3e82426c7226\" (UID: \"3fe8fd08-d368-4dc2-854d-3e82426c7226\") "
Sep 9 04:03:05.463061 kubelet[2688]: I0909 04:03:05.463024 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fe8fd08-d368-4dc2-854d-3e82426c7226-whisker-ca-bundle\") pod \"3fe8fd08-d368-4dc2-854d-3e82426c7226\" (UID: \"3fe8fd08-d368-4dc2-854d-3e82426c7226\") "
Sep 9 04:03:05.489565 kubelet[2688]: I0909 04:03:05.488902 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe8fd08-d368-4dc2-854d-3e82426c7226-kube-api-access-frp7n" (OuterVolumeSpecName: "kube-api-access-frp7n") pod "3fe8fd08-d368-4dc2-854d-3e82426c7226" (UID: "3fe8fd08-d368-4dc2-854d-3e82426c7226"). InnerVolumeSpecName "kube-api-access-frp7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 9 04:03:05.489565 kubelet[2688]: I0909 04:03:05.487890 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fe8fd08-d368-4dc2-854d-3e82426c7226-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3fe8fd08-d368-4dc2-854d-3e82426c7226" (UID: "3fe8fd08-d368-4dc2-854d-3e82426c7226"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 9 04:03:05.489833 kubelet[2688]: I0909 04:03:05.489602 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe8fd08-d368-4dc2-854d-3e82426c7226-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3fe8fd08-d368-4dc2-854d-3e82426c7226" (UID: "3fe8fd08-d368-4dc2-854d-3e82426c7226"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 9 04:03:05.528935 systemd[1]: var-lib-kubelet-pods-3fe8fd08\x2dd368\x2d4dc2\x2d854d\x2d3e82426c7226-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfrp7n.mount: Deactivated successfully.
Sep 9 04:03:05.529121 systemd[1]: var-lib-kubelet-pods-3fe8fd08\x2dd368\x2d4dc2\x2d854d\x2d3e82426c7226-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 9 04:03:05.564429 kubelet[2688]: I0909 04:03:05.564109 2688 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fe8fd08-d368-4dc2-854d-3e82426c7226-whisker-ca-bundle\") on node \"srv-gbnqu.gb1.brightbox.com\" DevicePath \"\""
Sep 9 04:03:05.564429 kubelet[2688]: I0909 04:03:05.564155 2688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frp7n\" (UniqueName: \"kubernetes.io/projected/3fe8fd08-d368-4dc2-854d-3e82426c7226-kube-api-access-frp7n\") on node \"srv-gbnqu.gb1.brightbox.com\" DevicePath \"\""
Sep 9 04:03:05.564429 kubelet[2688]: I0909 04:03:05.564227 2688 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3fe8fd08-d368-4dc2-854d-3e82426c7226-whisker-backend-key-pair\") on node \"srv-gbnqu.gb1.brightbox.com\" DevicePath \"\""
Sep 9 04:03:05.571062 systemd-networkd[1432]: cali9f6818e679a: Link UP
Sep 9 04:03:05.572922 systemd-networkd[1432]: cali9f6818e679a: Gained carrier
Sep 9 04:03:05.594944 containerd[1511]: time="2025-09-09T04:03:05.594662138Z" level=info msg="StopPodSandbox for \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\""
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.356 [INFO][4030] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.384 [INFO][4030] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0 calico-kube-controllers-6c8cd869cf- calico-system e9173aa2-a083-4753-8a52-dc5c6feaca7e 934 0 2025-09-09 04:02:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c8cd869cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-gbnqu.gb1.brightbox.com calico-kube-controllers-6c8cd869cf-qw87t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9f6818e679a [] [] }} ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd869cf-qw87t" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.384 [INFO][4030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd869cf-qw87t" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.462 [INFO][4058] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" HandleID="k8s-pod-network.9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.462 [INFO][4058] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" HandleID="k8s-pod-network.9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024e610), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gbnqu.gb1.brightbox.com", "pod":"calico-kube-controllers-6c8cd869cf-qw87t", "timestamp":"2025-09-09 04:03:05.462112022 +0000 UTC"}, Hostname:"srv-gbnqu.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.462 [INFO][4058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.462 [INFO][4058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.462 [INFO][4058] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gbnqu.gb1.brightbox.com'
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.482 [INFO][4058] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.498 [INFO][4058] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.507 [INFO][4058] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.512 [INFO][4058] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.515 [INFO][4058] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.515 [INFO][4058] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.517 [INFO][4058] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.526 [INFO][4058] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.538 [INFO][4058] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.193/26] block=192.168.51.192/26 handle="k8s-pod-network.9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.539 [INFO][4058] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.193/26] handle="k8s-pod-network.9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.539 [INFO][4058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:03:05.630969 containerd[1511]: 2025-09-09 04:03:05.539 [INFO][4058] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.193/26] IPv6=[] ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" HandleID="k8s-pod-network.9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0"
Sep 9 04:03:05.637444 containerd[1511]: 2025-09-09 04:03:05.543 [INFO][4030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd869cf-qw87t" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0", GenerateName:"calico-kube-controllers-6c8cd869cf-", Namespace:"calico-system", SelfLink:"", UID:"e9173aa2-a083-4753-8a52-dc5c6feaca7e", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c8cd869cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6c8cd869cf-qw87t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f6818e679a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:03:05.637444 containerd[1511]: 2025-09-09 04:03:05.543 [INFO][4030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.193/32] ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd869cf-qw87t" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0"
Sep 9 04:03:05.637444 containerd[1511]: 2025-09-09 04:03:05.543 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f6818e679a ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd869cf-qw87t" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0"
Sep 9 04:03:05.637444 containerd[1511]: 2025-09-09 04:03:05.574 [INFO][4030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd869cf-qw87t" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0"
Sep 9 04:03:05.637444 containerd[1511]: 2025-09-09 04:03:05.576 [INFO][4030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd869cf-qw87t" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0", GenerateName:"calico-kube-controllers-6c8cd869cf-", Namespace:"calico-system", SelfLink:"", UID:"e9173aa2-a083-4753-8a52-dc5c6feaca7e", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c8cd869cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d", Pod:"calico-kube-controllers-6c8cd869cf-qw87t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f6818e679a", MAC:"22:c2:6a:5c:74:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:03:05.637444 containerd[1511]: 2025-09-09 04:03:05.616 [INFO][4030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd869cf-qw87t" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0"
Sep 9 04:03:05.682797 containerd[1511]: time="2025-09-09T04:03:05.682247355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 04:03:05.686029 containerd[1511]: time="2025-09-09T04:03:05.685438367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 04:03:05.686029 containerd[1511]: time="2025-09-09T04:03:05.685469013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:03:05.686029 containerd[1511]: time="2025-09-09T04:03:05.685641631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:03:05.745643 systemd[1]: run-containerd-runc-k8s.io-9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d-runc.EUPxGV.mount: Deactivated successfully.
Sep 9 04:03:05.769582 systemd[1]: Started cri-containerd-9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d.scope - libcontainer container 9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d.
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.744 [INFO][4086] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.754 [INFO][4086] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" iface="eth0" netns="/var/run/netns/cni-7a69995c-c144-abf2-7d9c-5b049c39864d"
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.755 [INFO][4086] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" iface="eth0" netns="/var/run/netns/cni-7a69995c-c144-abf2-7d9c-5b049c39864d"
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.756 [INFO][4086] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" iface="eth0" netns="/var/run/netns/cni-7a69995c-c144-abf2-7d9c-5b049c39864d"
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.756 [INFO][4086] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.756 [INFO][4086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.821 [INFO][4123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" HandleID="k8s-pod-network.e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.821 [INFO][4123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.821 [INFO][4123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.838 [WARNING][4123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" HandleID="k8s-pod-network.e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.838 [INFO][4123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" HandleID="k8s-pod-network.e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.840 [INFO][4123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:03:05.845449 containerd[1511]: 2025-09-09 04:03:05.843 [INFO][4086] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:05.847424 containerd[1511]: time="2025-09-09T04:03:05.846507009Z" level=info msg="TearDown network for sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\" successfully"
Sep 9 04:03:05.847424 containerd[1511]: time="2025-09-09T04:03:05.846550356Z" level=info msg="StopPodSandbox for \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\" returns successfully"
Sep 9 04:03:05.852681 containerd[1511]: time="2025-09-09T04:03:05.850913644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tdpm2,Uid:77d1e297-6db0-4528-90d8-7bdccecd3fb8,Namespace:kube-system,Attempt:1,}"
Sep 9 04:03:05.852293 systemd[1]: run-netns-cni\x2d7a69995c\x2dc144\x2dabf2\x2d7d9c\x2d5b049c39864d.mount: Deactivated successfully.
Sep 9 04:03:05.935184 containerd[1511]: time="2025-09-09T04:03:05.935130306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8cd869cf-qw87t,Uid:e9173aa2-a083-4753-8a52-dc5c6feaca7e,Namespace:calico-system,Attempt:1,} returns sandbox id \"9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d\""
Sep 9 04:03:05.948410 containerd[1511]: time="2025-09-09T04:03:05.948344535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 9 04:03:06.106227 systemd[1]: Removed slice kubepods-besteffort-pod3fe8fd08_d368_4dc2_854d_3e82426c7226.slice - libcontainer container kubepods-besteffort-pod3fe8fd08_d368_4dc2_854d_3e82426c7226.slice.
Sep 9 04:03:06.190663 systemd-networkd[1432]: calif5138a0980c: Link UP
Sep 9 04:03:06.191835 systemd-networkd[1432]: calif5138a0980c: Gained carrier
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:05.958 [INFO][4137] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:05.984 [INFO][4137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0 coredns-7c65d6cfc9- kube-system 77d1e297-6db0-4528-90d8-7bdccecd3fb8 948 0 2025-09-09 04:02:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gbnqu.gb1.brightbox.com coredns-7c65d6cfc9-tdpm2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif5138a0980c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tdpm2" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:05.984 [INFO][4137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tdpm2" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.032 [INFO][4156] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" HandleID="k8s-pod-network.37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.032 [INFO][4156] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" HandleID="k8s-pod-network.37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f100), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gbnqu.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-tdpm2", "timestamp":"2025-09-09 04:03:06.032697134 +0000 UTC"}, Hostname:"srv-gbnqu.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.033 [INFO][4156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.033 [INFO][4156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.033 [INFO][4156] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gbnqu.gb1.brightbox.com'
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.044 [INFO][4156] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.052 [INFO][4156] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.059 [INFO][4156] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.069 [INFO][4156] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.076 [INFO][4156] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.076 [INFO][4156] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.083 [INFO][4156] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.100 [INFO][4156] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.177 [INFO][4156] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.194/26] block=192.168.51.192/26 handle="k8s-pod-network.37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.177 [INFO][4156] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.194/26] handle="k8s-pod-network.37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" host="srv-gbnqu.gb1.brightbox.com"
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.177 [INFO][4156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:03:06.228444 containerd[1511]: 2025-09-09 04:03:06.177 [INFO][4156] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.194/26] IPv6=[] ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" HandleID="k8s-pod-network.37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:06.231445 containerd[1511]: 2025-09-09 04:03:06.182 [INFO][4137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tdpm2" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"77d1e297-6db0-4528-90d8-7bdccecd3fb8", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-tdpm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5138a0980c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:03:06.231445 containerd[1511]: 2025-09-09 04:03:06.182 [INFO][4137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.194/32] ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tdpm2" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:06.231445 containerd[1511]: 2025-09-09 04:03:06.183 [INFO][4137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5138a0980c ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tdpm2" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:06.231445 containerd[1511]: 2025-09-09 04:03:06.189 [INFO][4137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tdpm2" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:06.231445 containerd[1511]: 2025-09-09 04:03:06.190 [INFO][4137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tdpm2" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"77d1e297-6db0-4528-90d8-7bdccecd3fb8", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f", Pod:"coredns-7c65d6cfc9-tdpm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5138a0980c", MAC:"8e:6d:4d:f1:ba:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:03:06.231445 containerd[1511]: 2025-09-09 04:03:06.224 [INFO][4137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tdpm2" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:06.290483 containerd[1511]: time="2025-09-09T04:03:06.286212599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 04:03:06.290483 containerd[1511]: time="2025-09-09T04:03:06.289412645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 04:03:06.290483 containerd[1511]: time="2025-09-09T04:03:06.289437188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:03:06.290483 containerd[1511]: time="2025-09-09T04:03:06.289672001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 04:03:06.345388 systemd[1]: Started cri-containerd-37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f.scope - libcontainer container 37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f.
Sep 9 04:03:06.476062 kubelet[2688]: I0909 04:03:06.475743 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07ae687c-f59d-4220-b168-960a83619636-whisker-backend-key-pair\") pod \"whisker-f595bff58-228qc\" (UID: \"07ae687c-f59d-4220-b168-960a83619636\") " pod="calico-system/whisker-f595bff58-228qc" Sep 9 04:03:06.476062 kubelet[2688]: I0909 04:03:06.475848 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ae687c-f59d-4220-b168-960a83619636-whisker-ca-bundle\") pod \"whisker-f595bff58-228qc\" (UID: \"07ae687c-f59d-4220-b168-960a83619636\") " pod="calico-system/whisker-f595bff58-228qc" Sep 9 04:03:06.476062 kubelet[2688]: I0909 04:03:06.475894 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgxs5\" (UniqueName: \"kubernetes.io/projected/07ae687c-f59d-4220-b168-960a83619636-kube-api-access-wgxs5\") pod \"whisker-f595bff58-228qc\" (UID: \"07ae687c-f59d-4220-b168-960a83619636\") " pod="calico-system/whisker-f595bff58-228qc" Sep 9 04:03:06.509105 systemd[1]: Created slice kubepods-besteffort-pod07ae687c_f59d_4220_b168_960a83619636.slice - libcontainer container kubepods-besteffort-pod07ae687c_f59d_4220_b168_960a83619636.slice. 
Sep 9 04:03:06.523037 containerd[1511]: time="2025-09-09T04:03:06.521883165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tdpm2,Uid:77d1e297-6db0-4528-90d8-7bdccecd3fb8,Namespace:kube-system,Attempt:1,} returns sandbox id \"37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f\"" Sep 9 04:03:06.557439 containerd[1511]: time="2025-09-09T04:03:06.557121237Z" level=info msg="CreateContainer within sandbox \"37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 04:03:06.598953 systemd-networkd[1432]: cali9f6818e679a: Gained IPv6LL Sep 9 04:03:06.666377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400556922.mount: Deactivated successfully. Sep 9 04:03:06.684012 kubelet[2688]: I0909 04:03:06.683572 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe8fd08-d368-4dc2-854d-3e82426c7226" path="/var/lib/kubelet/pods/3fe8fd08-d368-4dc2-854d-3e82426c7226/volumes" Sep 9 04:03:06.701400 containerd[1511]: time="2025-09-09T04:03:06.700752448Z" level=info msg="CreateContainer within sandbox \"37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f8d0534771accf3c497102401e52fe43b9927477e2bb77a1c9f574df55483e2\"" Sep 9 04:03:06.703402 containerd[1511]: time="2025-09-09T04:03:06.703121963Z" level=info msg="StartContainer for \"0f8d0534771accf3c497102401e52fe43b9927477e2bb77a1c9f574df55483e2\"" Sep 9 04:03:06.784611 systemd[1]: Started cri-containerd-0f8d0534771accf3c497102401e52fe43b9927477e2bb77a1c9f574df55483e2.scope - libcontainer container 0f8d0534771accf3c497102401e52fe43b9927477e2bb77a1c9f574df55483e2. 
Sep 9 04:03:06.817196 containerd[1511]: time="2025-09-09T04:03:06.817118732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f595bff58-228qc,Uid:07ae687c-f59d-4220-b168-960a83619636,Namespace:calico-system,Attempt:0,}" Sep 9 04:03:06.878786 containerd[1511]: time="2025-09-09T04:03:06.878736776Z" level=info msg="StartContainer for \"0f8d0534771accf3c497102401e52fe43b9927477e2bb77a1c9f574df55483e2\" returns successfully" Sep 9 04:03:07.226465 systemd-networkd[1432]: cali6851b966e3c: Link UP Sep 9 04:03:07.231209 systemd-networkd[1432]: cali6851b966e3c: Gained carrier Sep 9 04:03:07.247867 kubelet[2688]: I0909 04:03:07.247765 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tdpm2" podStartSLOduration=54.247730734 podStartE2EDuration="54.247730734s" podCreationTimestamp="2025-09-09 04:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:03:07.14016217 +0000 UTC m=+58.950803089" watchObservedRunningTime="2025-09-09 04:03:07.247730734 +0000 UTC m=+59.058371658" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:06.948 [INFO][4295] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:06.977 [INFO][4295] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0 whisker-f595bff58- calico-system 07ae687c-f59d-4220-b168-960a83619636 969 0 2025-09-09 04:03:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f595bff58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-gbnqu.gb1.brightbox.com whisker-f595bff58-228qc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6851b966e3c 
[] [] }} ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Namespace="calico-system" Pod="whisker-f595bff58-228qc" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:06.977 [INFO][4295] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Namespace="calico-system" Pod="whisker-f595bff58-228qc" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.109 [INFO][4328] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" HandleID="k8s-pod-network.6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.109 [INFO][4328] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" HandleID="k8s-pod-network.6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5890), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gbnqu.gb1.brightbox.com", "pod":"whisker-f595bff58-228qc", "timestamp":"2025-09-09 04:03:07.109020792 +0000 UTC"}, Hostname:"srv-gbnqu.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.109 [INFO][4328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.109 [INFO][4328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.109 [INFO][4328] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gbnqu.gb1.brightbox.com' Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.143 [INFO][4328] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.161 [INFO][4328] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.170 [INFO][4328] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.173 [INFO][4328] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.178 [INFO][4328] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.178 [INFO][4328] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.185 [INFO][4328] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891 Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.196 [INFO][4328] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 
handle="k8s-pod-network.6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.210 [INFO][4328] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.195/26] block=192.168.51.192/26 handle="k8s-pod-network.6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.210 [INFO][4328] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.195/26] handle="k8s-pod-network.6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.210 [INFO][4328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:07.261711 containerd[1511]: 2025-09-09 04:03:07.210 [INFO][4328] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.195/26] IPv6=[] ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" HandleID="k8s-pod-network.6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0" Sep 9 04:03:07.266358 containerd[1511]: 2025-09-09 04:03:07.215 [INFO][4295] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Namespace="calico-system" Pod="whisker-f595bff58-228qc" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0", GenerateName:"whisker-f595bff58-", Namespace:"calico-system", SelfLink:"", UID:"07ae687c-f59d-4220-b168-960a83619636", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 3, 6, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f595bff58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"", Pod:"whisker-f595bff58-228qc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6851b966e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:07.266358 containerd[1511]: 2025-09-09 04:03:07.216 [INFO][4295] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.195/32] ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Namespace="calico-system" Pod="whisker-f595bff58-228qc" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0" Sep 9 04:03:07.266358 containerd[1511]: 2025-09-09 04:03:07.216 [INFO][4295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6851b966e3c ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Namespace="calico-system" Pod="whisker-f595bff58-228qc" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0" Sep 9 04:03:07.266358 containerd[1511]: 2025-09-09 04:03:07.226 [INFO][4295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Namespace="calico-system" 
Pod="whisker-f595bff58-228qc" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0" Sep 9 04:03:07.266358 containerd[1511]: 2025-09-09 04:03:07.227 [INFO][4295] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Namespace="calico-system" Pod="whisker-f595bff58-228qc" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0", GenerateName:"whisker-f595bff58-", Namespace:"calico-system", SelfLink:"", UID:"07ae687c-f59d-4220-b168-960a83619636", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f595bff58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891", Pod:"whisker-f595bff58-228qc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6851b966e3c", MAC:"06:b4:96:e1:fd:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 
9 04:03:07.266358 containerd[1511]: 2025-09-09 04:03:07.251 [INFO][4295] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891" Namespace="calico-system" Pod="whisker-f595bff58-228qc" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-whisker--f595bff58--228qc-eth0" Sep 9 04:03:07.333954 containerd[1511]: time="2025-09-09T04:03:07.332771311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 04:03:07.333954 containerd[1511]: time="2025-09-09T04:03:07.332911153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 04:03:07.333954 containerd[1511]: time="2025-09-09T04:03:07.332943731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:07.333954 containerd[1511]: time="2025-09-09T04:03:07.333123028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:07.385200 systemd[1]: Started cri-containerd-6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891.scope - libcontainer container 6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891. Sep 9 04:03:07.538739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479799587.mount: Deactivated successfully. 
Sep 9 04:03:07.594961 containerd[1511]: time="2025-09-09T04:03:07.594829008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f595bff58-228qc,Uid:07ae687c-f59d-4220-b168-960a83619636,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891\"" Sep 9 04:03:08.003536 systemd-networkd[1432]: calif5138a0980c: Gained IPv6LL Sep 9 04:03:08.346411 kernel: bpftool[4464]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 9 04:03:08.389375 systemd-networkd[1432]: cali6851b966e3c: Gained IPv6LL Sep 9 04:03:08.532583 containerd[1511]: time="2025-09-09T04:03:08.532507166Z" level=info msg="StopPodSandbox for \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\"" Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.688 [WARNING][4473] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0", GenerateName:"calico-kube-controllers-6c8cd869cf-", Namespace:"calico-system", SelfLink:"", UID:"e9173aa2-a083-4753-8a52-dc5c6feaca7e", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c8cd869cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d", Pod:"calico-kube-controllers-6c8cd869cf-qw87t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f6818e679a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.693 [INFO][4473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.693 [INFO][4473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" iface="eth0" netns="" Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.693 [INFO][4473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.693 [INFO][4473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.797 [INFO][4485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" HandleID="k8s-pod-network.e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.798 [INFO][4485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.798 [INFO][4485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.819 [WARNING][4485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" HandleID="k8s-pod-network.e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.819 [INFO][4485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" HandleID="k8s-pod-network.e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.826 [INFO][4485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:08.838857 containerd[1511]: 2025-09-09 04:03:08.829 [INFO][4473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:08.838857 containerd[1511]: time="2025-09-09T04:03:08.838095931Z" level=info msg="TearDown network for sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\" successfully" Sep 9 04:03:08.838857 containerd[1511]: time="2025-09-09T04:03:08.838217463Z" level=info msg="StopPodSandbox for \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\" returns successfully" Sep 9 04:03:08.848016 containerd[1511]: time="2025-09-09T04:03:08.843419681Z" level=info msg="RemovePodSandbox for \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\"" Sep 9 04:03:08.848016 containerd[1511]: time="2025-09-09T04:03:08.843470085Z" level=info msg="Forcibly stopping sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\"" Sep 9 04:03:08.970135 systemd-networkd[1432]: vxlan.calico: Link UP Sep 9 04:03:08.970150 systemd-networkd[1432]: vxlan.calico: Gained carrier Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 
04:03:08.943 [WARNING][4512] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0", GenerateName:"calico-kube-controllers-6c8cd869cf-", Namespace:"calico-system", SelfLink:"", UID:"e9173aa2-a083-4753-8a52-dc5c6feaca7e", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c8cd869cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d", Pod:"calico-kube-controllers-6c8cd869cf-qw87t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f6818e679a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:08.943 [INFO][4512] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:08.943 [INFO][4512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" iface="eth0" netns="" Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:08.944 [INFO][4512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:08.944 [INFO][4512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:09.072 [INFO][4520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" HandleID="k8s-pod-network.e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:09.078 [INFO][4520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:09.078 [INFO][4520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:09.100 [WARNING][4520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" HandleID="k8s-pod-network.e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:09.100 [INFO][4520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" HandleID="k8s-pod-network.e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--kube--controllers--6c8cd869cf--qw87t-eth0" Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:09.104 [INFO][4520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:09.119845 containerd[1511]: 2025-09-09 04:03:09.112 [INFO][4512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51" Sep 9 04:03:09.119845 containerd[1511]: time="2025-09-09T04:03:09.119269689Z" level=info msg="TearDown network for sandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\" successfully" Sep 9 04:03:09.160340 containerd[1511]: time="2025-09-09T04:03:09.160280552Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 04:03:09.160627 containerd[1511]: time="2025-09-09T04:03:09.160594469Z" level=info msg="RemovePodSandbox \"e876f68cfe26399352ebf47e06f20f70ac215d02b20a39cbe7f1545e69249b51\" returns successfully"
Sep 9 04:03:09.163826 containerd[1511]: time="2025-09-09T04:03:09.163796319Z" level=info msg="StopPodSandbox for \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\""
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.324 [WARNING][4565] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0"
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.324 [INFO][4565] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7"
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.324 [INFO][4565] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" iface="eth0" netns=""
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.324 [INFO][4565] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7"
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.324 [INFO][4565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7"
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.373 [INFO][4575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" HandleID="k8s-pod-network.73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0"
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.373 [INFO][4575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.374 [INFO][4575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.396 [WARNING][4575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" HandleID="k8s-pod-network.73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0"
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.399 [INFO][4575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" HandleID="k8s-pod-network.73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0"
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.405 [INFO][4575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:03:09.409760 containerd[1511]: 2025-09-09 04:03:09.407 [INFO][4565] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7"
Sep 9 04:03:09.409760 containerd[1511]: time="2025-09-09T04:03:09.409710210Z" level=info msg="TearDown network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\" successfully"
Sep 9 04:03:09.413153 containerd[1511]: time="2025-09-09T04:03:09.409756013Z" level=info msg="StopPodSandbox for \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\" returns successfully"
Sep 9 04:03:09.413153 containerd[1511]: time="2025-09-09T04:03:09.411173436Z" level=info msg="RemovePodSandbox for \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\""
Sep 9 04:03:09.413153 containerd[1511]: time="2025-09-09T04:03:09.411212466Z" level=info msg="Forcibly stopping sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\""
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.507 [WARNING][4589] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0"
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.508 [INFO][4589] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7"
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.508 [INFO][4589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" iface="eth0" netns=""
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.508 [INFO][4589] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7"
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.508 [INFO][4589] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7"
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.584 [INFO][4607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" HandleID="k8s-pod-network.73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0"
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.585 [INFO][4607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.585 [INFO][4607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.601 [WARNING][4607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" HandleID="k8s-pod-network.73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0"
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.601 [INFO][4607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" HandleID="k8s-pod-network.73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7" Workload="srv--gbnqu.gb1.brightbox.com-k8s-whisker--6cf9bd4f5f--khjkv-eth0"
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.612 [INFO][4607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:03:09.619667 containerd[1511]: 2025-09-09 04:03:09.616 [INFO][4589] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7"
Sep 9 04:03:09.662096 containerd[1511]: time="2025-09-09T04:03:09.620680542Z" level=info msg="TearDown network for sandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\" successfully"
Sep 9 04:03:09.683989 containerd[1511]: time="2025-09-09T04:03:09.682698769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 9 04:03:09.683989 containerd[1511]: time="2025-09-09T04:03:09.682807787Z" level=info msg="RemovePodSandbox \"73576f893161998f389f73532ca774242e05fe9b8961cae9f389fd1df7f373c7\" returns successfully"
Sep 9 04:03:09.688933 containerd[1511]: time="2025-09-09T04:03:09.688895116Z" level=info msg="StopPodSandbox for \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\""
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.810 [WARNING][4654] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"77d1e297-6db0-4528-90d8-7bdccecd3fb8", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f", Pod:"coredns-7c65d6cfc9-tdpm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5138a0980c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.811 [INFO][4654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.811 [INFO][4654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" iface="eth0" netns=""
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.811 [INFO][4654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.811 [INFO][4654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.871 [INFO][4661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" HandleID="k8s-pod-network.e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.872 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.872 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.882 [WARNING][4661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" HandleID="k8s-pod-network.e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.882 [INFO][4661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" HandleID="k8s-pod-network.e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.885 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:03:09.891632 containerd[1511]: 2025-09-09 04:03:09.888 [INFO][4654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:09.893164 containerd[1511]: time="2025-09-09T04:03:09.892490605Z" level=info msg="TearDown network for sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\" successfully"
Sep 9 04:03:09.893164 containerd[1511]: time="2025-09-09T04:03:09.892528112Z" level=info msg="StopPodSandbox for \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\" returns successfully"
Sep 9 04:03:09.894041 containerd[1511]: time="2025-09-09T04:03:09.894007707Z" level=info msg="RemovePodSandbox for \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\""
Sep 9 04:03:09.894823 containerd[1511]: time="2025-09-09T04:03:09.894433328Z" level=info msg="Forcibly stopping sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\""
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:09.979 [WARNING][4675] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"77d1e297-6db0-4528-90d8-7bdccecd3fb8", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"37f830a3ace7728914fbc66db42659508a5eb608880b80120327d08508a3796f", Pod:"coredns-7c65d6cfc9-tdpm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5138a0980c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:09.979 [INFO][4675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:09.979 [INFO][4675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" iface="eth0" netns=""
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:09.979 [INFO][4675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:09.979 [INFO][4675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:10.039 [INFO][4682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" HandleID="k8s-pod-network.e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:10.039 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:10.039 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:10.054 [WARNING][4682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" HandleID="k8s-pod-network.e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:10.054 [INFO][4682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" HandleID="k8s-pod-network.e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--tdpm2-eth0"
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:10.060 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:03:10.068572 containerd[1511]: 2025-09-09 04:03:10.064 [INFO][4675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b"
Sep 9 04:03:10.071382 containerd[1511]: time="2025-09-09T04:03:10.069518629Z" level=info msg="TearDown network for sandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\" successfully"
Sep 9 04:03:10.083021 containerd[1511]: time="2025-09-09T04:03:10.082972322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 9 04:03:10.084863 containerd[1511]: time="2025-09-09T04:03:10.084817959Z" level=info msg="RemovePodSandbox \"e039ea8a309c3f6e762e793a3deb9b55288525d8a7760233ee29b2d7de6ea03b\" returns successfully"
Sep 9 04:03:10.691945 systemd-networkd[1432]: vxlan.calico: Gained IPv6LL
Sep 9 04:03:11.498473 containerd[1511]: time="2025-09-09T04:03:11.498311524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:11.500637 containerd[1511]: time="2025-09-09T04:03:11.500114478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 9 04:03:11.500897 containerd[1511]: time="2025-09-09T04:03:11.500818894Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:11.509147 containerd[1511]: time="2025-09-09T04:03:11.509068748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:11.510599 containerd[1511]: time="2025-09-09T04:03:11.510537932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.56199094s"
Sep 9 04:03:11.510704 containerd[1511]: time="2025-09-09T04:03:11.510623575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 9 04:03:11.514038 containerd[1511]: time="2025-09-09T04:03:11.513824792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 9 04:03:11.538803 containerd[1511]: time="2025-09-09T04:03:11.538422700Z" level=info msg="CreateContainer within sandbox \"9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 9 04:03:11.557982 containerd[1511]: time="2025-09-09T04:03:11.557923476Z" level=info msg="CreateContainer within sandbox \"9c1b3f1fbfb097ae0213d0a1afc1f65c8b45f880945a99f524dfbe5154a5135d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4c42943bd89c01ff70b5f0135c328deb471690106d5d5480f7789a6e73ff0c90\""
Sep 9 04:03:11.565310 containerd[1511]: time="2025-09-09T04:03:11.561589199Z" level=info msg="StartContainer for \"4c42943bd89c01ff70b5f0135c328deb471690106d5d5480f7789a6e73ff0c90\""
Sep 9 04:03:11.650713 systemd[1]: Started cri-containerd-4c42943bd89c01ff70b5f0135c328deb471690106d5d5480f7789a6e73ff0c90.scope - libcontainer container 4c42943bd89c01ff70b5f0135c328deb471690106d5d5480f7789a6e73ff0c90.
Sep 9 04:03:11.732942 containerd[1511]: time="2025-09-09T04:03:11.732809609Z" level=info msg="StartContainer for \"4c42943bd89c01ff70b5f0135c328deb471690106d5d5480f7789a6e73ff0c90\" returns successfully"
Sep 9 04:03:12.165514 kubelet[2688]: I0909 04:03:12.161075 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c8cd869cf-qw87t" podStartSLOduration=34.594249936 podStartE2EDuration="40.16102487s" podCreationTimestamp="2025-09-09 04:02:32 +0000 UTC" firstStartedPulling="2025-09-09 04:03:05.945463484 +0000 UTC m=+57.756104397" lastFinishedPulling="2025-09-09 04:03:11.512238418 +0000 UTC m=+63.322879331" observedRunningTime="2025-09-09 04:03:12.157630356 +0000 UTC m=+63.968271278" watchObservedRunningTime="2025-09-09 04:03:12.16102487 +0000 UTC m=+63.971665782"
Sep 9 04:03:12.258477 update_engine[1489]: I20250909 04:03:12.258293 1489 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 9 04:03:12.258477 update_engine[1489]: I20250909 04:03:12.258481 1489 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 9 04:03:12.266122 update_engine[1489]: I20250909 04:03:12.266073 1489 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 9 04:03:12.267278 update_engine[1489]: I20250909 04:03:12.267241 1489 omaha_request_params.cc:62] Current group set to lts
Sep 9 04:03:12.268162 update_engine[1489]: I20250909 04:03:12.268121 1489 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 9 04:03:12.268162 update_engine[1489]: I20250909 04:03:12.268151 1489 update_attempter.cc:643] Scheduling an action processor start.
Sep 9 04:03:12.268294 update_engine[1489]: I20250909 04:03:12.268191 1489 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 9 04:03:12.268706 update_engine[1489]: I20250909 04:03:12.268285 1489 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 9 04:03:12.269393 update_engine[1489]: I20250909 04:03:12.268812 1489 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 9 04:03:12.269393 update_engine[1489]: I20250909 04:03:12.268844 1489 omaha_request_action.cc:272] Request:
Sep 9 04:03:12.269393 update_engine[1489]:
Sep 9 04:03:12.269393 update_engine[1489]:
Sep 9 04:03:12.269393 update_engine[1489]:
Sep 9 04:03:12.269393 update_engine[1489]:
Sep 9 04:03:12.269393 update_engine[1489]:
Sep 9 04:03:12.269393 update_engine[1489]:
Sep 9 04:03:12.269393 update_engine[1489]:
Sep 9 04:03:12.269393 update_engine[1489]:
Sep 9 04:03:12.269393 update_engine[1489]: I20250909 04:03:12.268882 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 04:03:12.302270 update_engine[1489]: I20250909 04:03:12.302112 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 04:03:12.303158 update_engine[1489]: I20250909 04:03:12.303035 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 04:03:12.312976 update_engine[1489]: E20250909 04:03:12.312783 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 04:03:12.312976 update_engine[1489]: I20250909 04:03:12.312915 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 9 04:03:12.319508 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 9 04:03:12.529790 systemd[1]: run-containerd-runc-k8s.io-4c42943bd89c01ff70b5f0135c328deb471690106d5d5480f7789a6e73ff0c90-runc.oPH46l.mount: Deactivated successfully.
Sep 9 04:03:13.484140 containerd[1511]: time="2025-09-09T04:03:13.483903614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:13.487590 containerd[1511]: time="2025-09-09T04:03:13.487515619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 9 04:03:13.491623 containerd[1511]: time="2025-09-09T04:03:13.491517528Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:13.494743 containerd[1511]: time="2025-09-09T04:03:13.494667219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:13.496393 containerd[1511]: time="2025-09-09T04:03:13.496201811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.98218115s"
Sep 9 04:03:13.496393 containerd[1511]: time="2025-09-09T04:03:13.496259008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 9 04:03:13.501197 containerd[1511]: time="2025-09-09T04:03:13.501146570Z" level=info msg="CreateContainer within sandbox \"6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 9 04:03:13.520998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998525846.mount: Deactivated successfully.
Sep 9 04:03:13.521755 containerd[1511]: time="2025-09-09T04:03:13.521312094Z" level=info msg="CreateContainer within sandbox \"6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bf42062cb05f9eab8872a26f4bf51ec37c0adc5a97f136ab618b454482dfad18\""
Sep 9 04:03:13.524242 containerd[1511]: time="2025-09-09T04:03:13.523987672Z" level=info msg="StartContainer for \"bf42062cb05f9eab8872a26f4bf51ec37c0adc5a97f136ab618b454482dfad18\""
Sep 9 04:03:13.591579 systemd[1]: Started cri-containerd-bf42062cb05f9eab8872a26f4bf51ec37c0adc5a97f136ab618b454482dfad18.scope - libcontainer container bf42062cb05f9eab8872a26f4bf51ec37c0adc5a97f136ab618b454482dfad18.
Sep 9 04:03:13.600807 containerd[1511]: time="2025-09-09T04:03:13.600478489Z" level=info msg="StopPodSandbox for \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\""
Sep 9 04:03:13.694878 containerd[1511]: time="2025-09-09T04:03:13.694791018Z" level=info msg="StartContainer for \"bf42062cb05f9eab8872a26f4bf51ec37c0adc5a97f136ab618b454482dfad18\" returns successfully"
Sep 9 04:03:13.699163 containerd[1511]: time="2025-09-09T04:03:13.699127806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.718 [INFO][4798] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.725 [INFO][4798] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" iface="eth0" netns="/var/run/netns/cni-ee25b339-5499-3560-2ae7-781ac2d9f45c"
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.726 [INFO][4798] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" iface="eth0" netns="/var/run/netns/cni-ee25b339-5499-3560-2ae7-781ac2d9f45c"
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.726 [INFO][4798] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" iface="eth0" netns="/var/run/netns/cni-ee25b339-5499-3560-2ae7-781ac2d9f45c"
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.726 [INFO][4798] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.726 [INFO][4798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.780 [INFO][4812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" HandleID="k8s-pod-network.6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.782 [INFO][4812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.782 [INFO][4812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.798 [WARNING][4812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" HandleID="k8s-pod-network.6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.798 [INFO][4812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" HandleID="k8s-pod-network.6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.802 [INFO][4812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:03:13.807535 containerd[1511]: 2025-09-09 04:03:13.804 [INFO][4798] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:03:13.809649 containerd[1511]: time="2025-09-09T04:03:13.809588362Z" level=info msg="TearDown network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\" successfully"
Sep 9 04:03:13.809649 containerd[1511]: time="2025-09-09T04:03:13.809634058Z" level=info msg="StopPodSandbox for \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\" returns successfully"
Sep 9 04:03:13.813246 containerd[1511]: time="2025-09-09T04:03:13.812796012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79jrx,Uid:4f3923f8-1ebf-4579-9a05-a6111fc5a148,Namespace:calico-system,Attempt:1,}"
Sep 9 04:03:13.813223 systemd[1]: run-netns-cni\x2dee25b339\x2d5499\x2d3560\x2d2ae7\x2d781ac2d9f45c.mount: Deactivated successfully.
Sep 9 04:03:14.077957 systemd-networkd[1432]: calid590fdc21db: Link UP
Sep 9 04:03:14.081097 systemd-networkd[1432]: calid590fdc21db: Gained carrier
Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:13.920 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0 csi-node-driver- calico-system 4f3923f8-1ebf-4579-9a05-a6111fc5a148 1016 0 2025-09-09 04:02:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-gbnqu.gb1.brightbox.com csi-node-driver-79jrx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid590fdc21db [] [] }} ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Namespace="calico-system" Pod="csi-node-driver-79jrx" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-"
Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:13.920 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Namespace="calico-system" Pod="csi-node-driver-79jrx" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:13.978 [INFO][4835] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" HandleID="k8s-pod-network.953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:13.978 [INFO][4835] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" HandleID="k8s-pod-network.953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5740), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gbnqu.gb1.brightbox.com", "pod":"csi-node-driver-79jrx", "timestamp":"2025-09-09 04:03:13.978322579 +0000 UTC"}, Hostname:"srv-gbnqu.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:13.978 [INFO][4835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:13.978 [INFO][4835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:13.979 [INFO][4835] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gbnqu.gb1.brightbox.com' Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:13.994 [INFO][4835] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.002 [INFO][4835] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.016 [INFO][4835] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.022 [INFO][4835] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.026 [INFO][4835] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.026 [INFO][4835] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.028 [INFO][4835] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.048 [INFO][4835] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.064 [INFO][4835] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.196/26] block=192.168.51.192/26 handle="k8s-pod-network.953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.064 [INFO][4835] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.196/26] handle="k8s-pod-network.953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.064 [INFO][4835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:14.106550 containerd[1511]: 2025-09-09 04:03:14.064 [INFO][4835] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.196/26] IPv6=[] ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" HandleID="k8s-pod-network.953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0" Sep 9 04:03:14.109478 containerd[1511]: 2025-09-09 04:03:14.068 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Namespace="calico-system" Pod="csi-node-driver-79jrx" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f3923f8-1ebf-4579-9a05-a6111fc5a148", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-79jrx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid590fdc21db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:14.109478 containerd[1511]: 2025-09-09 04:03:14.068 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.196/32] ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Namespace="calico-system" Pod="csi-node-driver-79jrx" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0" Sep 9 04:03:14.109478 containerd[1511]: 2025-09-09 04:03:14.068 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid590fdc21db ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Namespace="calico-system" Pod="csi-node-driver-79jrx" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0" Sep 9 04:03:14.109478 containerd[1511]: 2025-09-09 04:03:14.074 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Namespace="calico-system" Pod="csi-node-driver-79jrx" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0" Sep 
9 04:03:14.109478 containerd[1511]: 2025-09-09 04:03:14.074 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Namespace="calico-system" Pod="csi-node-driver-79jrx" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f3923f8-1ebf-4579-9a05-a6111fc5a148", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad", Pod:"csi-node-driver-79jrx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid590fdc21db", MAC:"e6:9a:42:79:d2:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:14.109478 
containerd[1511]: 2025-09-09 04:03:14.096 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad" Namespace="calico-system" Pod="csi-node-driver-79jrx" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0" Sep 9 04:03:14.164808 containerd[1511]: time="2025-09-09T04:03:14.164615596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 04:03:14.165006 containerd[1511]: time="2025-09-09T04:03:14.164833350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 04:03:14.165006 containerd[1511]: time="2025-09-09T04:03:14.164938566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:14.165401 containerd[1511]: time="2025-09-09T04:03:14.165205621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:14.225671 systemd[1]: Started cri-containerd-953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad.scope - libcontainer container 953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad. 
Sep 9 04:03:14.289500 containerd[1511]: time="2025-09-09T04:03:14.289442011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79jrx,Uid:4f3923f8-1ebf-4579-9a05-a6111fc5a148,Namespace:calico-system,Attempt:1,} returns sandbox id \"953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad\"" Sep 9 04:03:14.594858 containerd[1511]: time="2025-09-09T04:03:14.593891398Z" level=info msg="StopPodSandbox for \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\"" Sep 9 04:03:14.597756 containerd[1511]: time="2025-09-09T04:03:14.596917779Z" level=info msg="StopPodSandbox for \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\"" Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.695 [INFO][4909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.695 [INFO][4909] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" iface="eth0" netns="/var/run/netns/cni-e6b243a4-8211-2b23-10bf-2ece3dc7860a" Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.695 [INFO][4909] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" iface="eth0" netns="/var/run/netns/cni-e6b243a4-8211-2b23-10bf-2ece3dc7860a" Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.697 [INFO][4909] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" iface="eth0" netns="/var/run/netns/cni-e6b243a4-8211-2b23-10bf-2ece3dc7860a" Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.698 [INFO][4909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.698 [INFO][4909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.777 [INFO][4930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" HandleID="k8s-pod-network.d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.778 [INFO][4930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.778 [INFO][4930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.794 [WARNING][4930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" HandleID="k8s-pod-network.d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.794 [INFO][4930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" HandleID="k8s-pod-network.d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.799 [INFO][4930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:14.809353 containerd[1511]: 2025-09-09 04:03:14.805 [INFO][4909] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:03:14.813439 containerd[1511]: time="2025-09-09T04:03:14.812484456Z" level=info msg="TearDown network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\" successfully" Sep 9 04:03:14.813439 containerd[1511]: time="2025-09-09T04:03:14.812544816Z" level=info msg="StopPodSandbox for \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\" returns successfully" Sep 9 04:03:14.818346 systemd[1]: run-netns-cni\x2de6b243a4\x2d8211\x2d2b23\x2d10bf\x2d2ece3dc7860a.mount: Deactivated successfully. 
Sep 9 04:03:14.819900 containerd[1511]: time="2025-09-09T04:03:14.818578353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d865dc46-shtjq,Uid:4164c1d5-1085-4008-9d19-95f326c5d9e7,Namespace:calico-apiserver,Attempt:1,}" Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.709 [INFO][4913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.709 [INFO][4913] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" iface="eth0" netns="/var/run/netns/cni-4a248c32-e558-3777-d515-f014ed775bfa" Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.710 [INFO][4913] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" iface="eth0" netns="/var/run/netns/cni-4a248c32-e558-3777-d515-f014ed775bfa" Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.710 [INFO][4913] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" iface="eth0" netns="/var/run/netns/cni-4a248c32-e558-3777-d515-f014ed775bfa" Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.710 [INFO][4913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.710 [INFO][4913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.795 [INFO][4935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" HandleID="k8s-pod-network.031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.795 [INFO][4935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.799 [INFO][4935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.812 [WARNING][4935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" HandleID="k8s-pod-network.031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.816 [INFO][4935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" HandleID="k8s-pod-network.031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.820 [INFO][4935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:14.825043 containerd[1511]: 2025-09-09 04:03:14.822 [INFO][4913] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:03:14.827463 containerd[1511]: time="2025-09-09T04:03:14.827417153Z" level=info msg="TearDown network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\" successfully" Sep 9 04:03:14.827542 containerd[1511]: time="2025-09-09T04:03:14.827462675Z" level=info msg="StopPodSandbox for \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\" returns successfully" Sep 9 04:03:14.830475 containerd[1511]: time="2025-09-09T04:03:14.828285398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6m2tz,Uid:485695d1-af74-4c84-bc1e-c3693d7e6d5c,Namespace:kube-system,Attempt:1,}" Sep 9 04:03:14.830230 systemd[1]: run-netns-cni\x2d4a248c32\x2de558\x2d3777\x2dd515\x2df014ed775bfa.mount: Deactivated successfully. 
Sep 9 04:03:15.088064 systemd-networkd[1432]: cali5a6e2a1f5a4: Link UP Sep 9 04:03:15.092206 systemd-networkd[1432]: cali5a6e2a1f5a4: Gained carrier Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:14.923 [INFO][4946] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0 calico-apiserver-7d865dc46- calico-apiserver 4164c1d5-1085-4008-9d19-95f326c5d9e7 1024 0 2025-09-09 04:02:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d865dc46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gbnqu.gb1.brightbox.com calico-apiserver-7d865dc46-shtjq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5a6e2a1f5a4 [] [] }} ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-shtjq" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:14.924 [INFO][4946] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-shtjq" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:14.991 [INFO][4968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" HandleID="k8s-pod-network.c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:03:15.166879 
containerd[1511]: 2025-09-09 04:03:14.991 [INFO][4968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" HandleID="k8s-pod-network.c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gbnqu.gb1.brightbox.com", "pod":"calico-apiserver-7d865dc46-shtjq", "timestamp":"2025-09-09 04:03:14.991497661 +0000 UTC"}, Hostname:"srv-gbnqu.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:14.991 [INFO][4968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:14.991 [INFO][4968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:14.992 [INFO][4968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gbnqu.gb1.brightbox.com' Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.007 [INFO][4968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.019 [INFO][4968] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.027 [INFO][4968] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.031 [INFO][4968] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.035 [INFO][4968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.035 [INFO][4968] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.038 [INFO][4968] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.053 [INFO][4968] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.069 [INFO][4968] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.197/26] block=192.168.51.192/26 handle="k8s-pod-network.c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.070 [INFO][4968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.197/26] handle="k8s-pod-network.c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.071 [INFO][4968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:15.166879 containerd[1511]: 2025-09-09 04:03:15.071 [INFO][4968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.197/26] IPv6=[] ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" HandleID="k8s-pod-network.c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:03:15.172827 containerd[1511]: 2025-09-09 04:03:15.076 [INFO][4946] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-shtjq" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0", GenerateName:"calico-apiserver-7d865dc46-", Namespace:"calico-apiserver", SelfLink:"", UID:"4164c1d5-1085-4008-9d19-95f326c5d9e7", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d865dc46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7d865dc46-shtjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a6e2a1f5a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:15.172827 containerd[1511]: 2025-09-09 04:03:15.076 [INFO][4946] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.197/32] ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-shtjq" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:03:15.172827 containerd[1511]: 2025-09-09 04:03:15.076 [INFO][4946] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a6e2a1f5a4 ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-shtjq" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:03:15.172827 containerd[1511]: 2025-09-09 04:03:15.097 [INFO][4946] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Namespace="calico-apiserver" 
Pod="calico-apiserver-7d865dc46-shtjq" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:03:15.172827 containerd[1511]: 2025-09-09 04:03:15.099 [INFO][4946] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-shtjq" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0", GenerateName:"calico-apiserver-7d865dc46-", Namespace:"calico-apiserver", SelfLink:"", UID:"4164c1d5-1085-4008-9d19-95f326c5d9e7", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d865dc46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d", Pod:"calico-apiserver-7d865dc46-shtjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali5a6e2a1f5a4", MAC:"9e:3b:18:98:10:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:15.172827 containerd[1511]: 2025-09-09 04:03:15.142 [INFO][4946] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-shtjq" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:03:15.221698 systemd-networkd[1432]: cali2f55afb427a: Link UP Sep 9 04:03:15.222041 systemd-networkd[1432]: cali2f55afb427a: Gained carrier Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:14.965 [INFO][4956] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0 coredns-7c65d6cfc9- kube-system 485695d1-af74-4c84-bc1e-c3693d7e6d5c 1025 0 2025-09-09 04:02:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gbnqu.gb1.brightbox.com coredns-7c65d6cfc9-6m2tz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2f55afb427a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6m2tz" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:14.965 [INFO][4956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6m2tz" 
WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.100 [INFO][4978] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" HandleID="k8s-pod-network.24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.102 [INFO][4978] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" HandleID="k8s-pod-network.24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ff8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gbnqu.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-6m2tz", "timestamp":"2025-09-09 04:03:15.100546798 +0000 UTC"}, Hostname:"srv-gbnqu.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.102 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.102 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.102 [INFO][4978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gbnqu.gb1.brightbox.com' Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.125 [INFO][4978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.148 [INFO][4978] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.156 [INFO][4978] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.164 [INFO][4978] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.174 [INFO][4978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.174 [INFO][4978] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.178 [INFO][4978] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5 Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.192 [INFO][4978] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.209 [INFO][4978] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.198/26] block=192.168.51.192/26 handle="k8s-pod-network.24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.210 [INFO][4978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.198/26] handle="k8s-pod-network.24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.210 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:15.270828 containerd[1511]: 2025-09-09 04:03:15.210 [INFO][4978] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.198/26] IPv6=[] ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" HandleID="k8s-pod-network.24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:03:15.272073 containerd[1511]: 2025-09-09 04:03:15.216 [INFO][4956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6m2tz" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"485695d1-af74-4c84-bc1e-c3693d7e6d5c", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-6m2tz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f55afb427a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:15.272073 containerd[1511]: 2025-09-09 04:03:15.216 [INFO][4956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.198/32] ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6m2tz" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:03:15.272073 containerd[1511]: 2025-09-09 04:03:15.216 [INFO][4956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f55afb427a ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6m2tz" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:03:15.272073 containerd[1511]: 
2025-09-09 04:03:15.223 [INFO][4956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6m2tz" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:03:15.272073 containerd[1511]: 2025-09-09 04:03:15.225 [INFO][4956] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6m2tz" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"485695d1-af74-4c84-bc1e-c3693d7e6d5c", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5", Pod:"coredns-7c65d6cfc9-6m2tz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali2f55afb427a", MAC:"72:ee:f4:be:7a:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:15.272073 containerd[1511]: 2025-09-09 04:03:15.257 [INFO][4956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6m2tz" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:03:15.299823 systemd-networkd[1432]: calid590fdc21db: Gained IPv6LL Sep 9 04:03:15.319902 containerd[1511]: time="2025-09-09T04:03:15.319724538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 04:03:15.320169 containerd[1511]: time="2025-09-09T04:03:15.319856316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 04:03:15.323357 containerd[1511]: time="2025-09-09T04:03:15.323251143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:15.324269 containerd[1511]: time="2025-09-09T04:03:15.323974509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:15.335229 containerd[1511]: time="2025-09-09T04:03:15.335026926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 04:03:15.335704 containerd[1511]: time="2025-09-09T04:03:15.335568307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 04:03:15.335880 containerd[1511]: time="2025-09-09T04:03:15.335675098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:15.336390 containerd[1511]: time="2025-09-09T04:03:15.336175688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:15.371671 systemd[1]: Started cri-containerd-c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d.scope - libcontainer container c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d. Sep 9 04:03:15.392588 systemd[1]: Started cri-containerd-24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5.scope - libcontainer container 24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5. 
Sep 9 04:03:15.501762 containerd[1511]: time="2025-09-09T04:03:15.501692548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6m2tz,Uid:485695d1-af74-4c84-bc1e-c3693d7e6d5c,Namespace:kube-system,Attempt:1,} returns sandbox id \"24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5\"" Sep 9 04:03:15.506195 containerd[1511]: time="2025-09-09T04:03:15.505967318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d865dc46-shtjq,Uid:4164c1d5-1085-4008-9d19-95f326c5d9e7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d\"" Sep 9 04:03:15.525625 containerd[1511]: time="2025-09-09T04:03:15.525485962Z" level=info msg="CreateContainer within sandbox \"24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 04:03:15.551130 containerd[1511]: time="2025-09-09T04:03:15.551064462Z" level=info msg="CreateContainer within sandbox \"24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9c9d365d2df1fe1f65f5caa2e106ce4907f383328081fea8eef000592432b2f\"" Sep 9 04:03:15.553447 containerd[1511]: time="2025-09-09T04:03:15.552564795Z" level=info msg="StartContainer for \"f9c9d365d2df1fe1f65f5caa2e106ce4907f383328081fea8eef000592432b2f\"" Sep 9 04:03:15.604924 containerd[1511]: time="2025-09-09T04:03:15.604867408Z" level=info msg="StopPodSandbox for \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\"" Sep 9 04:03:15.633557 systemd[1]: Started cri-containerd-f9c9d365d2df1fe1f65f5caa2e106ce4907f383328081fea8eef000592432b2f.scope - libcontainer container f9c9d365d2df1fe1f65f5caa2e106ce4907f383328081fea8eef000592432b2f. 
Sep 9 04:03:15.695864 containerd[1511]: time="2025-09-09T04:03:15.695798923Z" level=info msg="StartContainer for \"f9c9d365d2df1fe1f65f5caa2e106ce4907f383328081fea8eef000592432b2f\" returns successfully" Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.754 [INFO][5115] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.754 [INFO][5115] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" iface="eth0" netns="/var/run/netns/cni-dd391d52-6c4e-7ffc-5e36-640a5c17ae87" Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.755 [INFO][5115] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" iface="eth0" netns="/var/run/netns/cni-dd391d52-6c4e-7ffc-5e36-640a5c17ae87" Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.758 [INFO][5115] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" iface="eth0" netns="/var/run/netns/cni-dd391d52-6c4e-7ffc-5e36-640a5c17ae87" Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.759 [INFO][5115] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.759 [INFO][5115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.837 [INFO][5138] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" HandleID="k8s-pod-network.f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.838 [INFO][5138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.838 [INFO][5138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.867 [WARNING][5138] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" HandleID="k8s-pod-network.f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.867 [INFO][5138] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" HandleID="k8s-pod-network.f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.877 [INFO][5138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:15.891148 containerd[1511]: 2025-09-09 04:03:15.885 [INFO][5115] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Sep 9 04:03:15.894396 containerd[1511]: time="2025-09-09T04:03:15.894128569Z" level=info msg="TearDown network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\" successfully" Sep 9 04:03:15.894396 containerd[1511]: time="2025-09-09T04:03:15.894168281Z" level=info msg="StopPodSandbox for \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\" returns successfully" Sep 9 04:03:15.899982 containerd[1511]: time="2025-09-09T04:03:15.899676144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gx7p8,Uid:60ea252b-bb65-4eeb-baac-a9493773063e,Namespace:calico-system,Attempt:1,}" Sep 9 04:03:15.901283 systemd[1]: run-netns-cni\x2ddd391d52\x2d6c4e\x2d7ffc\x2d5e36\x2d640a5c17ae87.mount: Deactivated successfully. 
Sep 9 04:03:16.179306 systemd-networkd[1432]: calib60b69e6e05: Link UP Sep 9 04:03:16.180752 systemd-networkd[1432]: calib60b69e6e05: Gained carrier Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:15.995 [INFO][5148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0 goldmane-7988f88666- calico-system 60ea252b-bb65-4eeb-baac-a9493773063e 1039 0 2025-09-09 04:02:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-gbnqu.gb1.brightbox.com goldmane-7988f88666-gx7p8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib60b69e6e05 [] [] }} ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Namespace="calico-system" Pod="goldmane-7988f88666-gx7p8" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:15.995 [INFO][5148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Namespace="calico-system" Pod="goldmane-7988f88666-gx7p8" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.065 [INFO][5160] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" HandleID="k8s-pod-network.9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.066 [INFO][5160] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" HandleID="k8s-pod-network.9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7080), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gbnqu.gb1.brightbox.com", "pod":"goldmane-7988f88666-gx7p8", "timestamp":"2025-09-09 04:03:16.065778392 +0000 UTC"}, Hostname:"srv-gbnqu.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.066 [INFO][5160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.066 [INFO][5160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.066 [INFO][5160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gbnqu.gb1.brightbox.com' Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.082 [INFO][5160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.097 [INFO][5160] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.112 [INFO][5160] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.114 [INFO][5160] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.118 [INFO][5160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.118 [INFO][5160] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.121 [INFO][5160] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238 Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.131 [INFO][5160] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.149 [INFO][5160] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.199/26] block=192.168.51.192/26 handle="k8s-pod-network.9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.149 [INFO][5160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.199/26] handle="k8s-pod-network.9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.149 [INFO][5160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:16.227197 containerd[1511]: 2025-09-09 04:03:16.149 [INFO][5160] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.199/26] IPv6=[] ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" HandleID="k8s-pod-network.9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:03:16.230766 containerd[1511]: 2025-09-09 04:03:16.169 [INFO][5148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Namespace="calico-system" Pod="goldmane-7988f88666-gx7p8" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"60ea252b-bb65-4eeb-baac-a9493773063e", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-7988f88666-gx7p8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib60b69e6e05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:16.230766 containerd[1511]: 2025-09-09 04:03:16.169 [INFO][5148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.199/32] ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Namespace="calico-system" Pod="goldmane-7988f88666-gx7p8" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:03:16.230766 containerd[1511]: 2025-09-09 04:03:16.169 [INFO][5148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib60b69e6e05 ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Namespace="calico-system" Pod="goldmane-7988f88666-gx7p8" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:03:16.230766 containerd[1511]: 2025-09-09 04:03:16.180 [INFO][5148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Namespace="calico-system" Pod="goldmane-7988f88666-gx7p8" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:03:16.230766 containerd[1511]: 2025-09-09 
04:03:16.184 [INFO][5148] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Namespace="calico-system" Pod="goldmane-7988f88666-gx7p8" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"60ea252b-bb65-4eeb-baac-a9493773063e", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238", Pod:"goldmane-7988f88666-gx7p8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib60b69e6e05", MAC:"be:a2:d7:2b:47:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:16.230766 containerd[1511]: 2025-09-09 04:03:16.216 [INFO][5148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238" Namespace="calico-system" Pod="goldmane-7988f88666-gx7p8" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:03:16.248865 kubelet[2688]: I0909 04:03:16.248586 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6m2tz" podStartSLOduration=62.248485882 podStartE2EDuration="1m2.248485882s" podCreationTimestamp="2025-09-09 04:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:03:16.245901288 +0000 UTC m=+68.056542216" watchObservedRunningTime="2025-09-09 04:03:16.248485882 +0000 UTC m=+68.059126799" Sep 9 04:03:16.337166 containerd[1511]: time="2025-09-09T04:03:16.337022416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 04:03:16.338890 containerd[1511]: time="2025-09-09T04:03:16.337131014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 04:03:16.338890 containerd[1511]: time="2025-09-09T04:03:16.338506422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:16.347394 containerd[1511]: time="2025-09-09T04:03:16.340894253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:16.396613 systemd[1]: Started cri-containerd-9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238.scope - libcontainer container 9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238. 
Sep 9 04:03:16.489931 containerd[1511]: time="2025-09-09T04:03:16.489246343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gx7p8,Uid:60ea252b-bb65-4eeb-baac-a9493773063e,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238\"" Sep 9 04:03:16.707907 systemd-networkd[1432]: cali5a6e2a1f5a4: Gained IPv6LL Sep 9 04:03:17.092200 systemd-networkd[1432]: cali2f55afb427a: Gained IPv6LL Sep 9 04:03:17.594043 containerd[1511]: time="2025-09-09T04:03:17.593451032Z" level=info msg="StopPodSandbox for \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\"" Sep 9 04:03:17.796845 systemd-networkd[1432]: calib60b69e6e05: Gained IPv6LL Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.759 [INFO][5235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.759 [INFO][5235] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" iface="eth0" netns="/var/run/netns/cni-9ee0b27b-6ca0-2333-1b09-1e0b75c03b3d" Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.763 [INFO][5235] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" iface="eth0" netns="/var/run/netns/cni-9ee0b27b-6ca0-2333-1b09-1e0b75c03b3d" Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.763 [INFO][5235] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" iface="eth0" netns="/var/run/netns/cni-9ee0b27b-6ca0-2333-1b09-1e0b75c03b3d" Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.763 [INFO][5235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.763 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.831 [INFO][5242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" HandleID="k8s-pod-network.c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.832 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.832 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.844 [WARNING][5242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" HandleID="k8s-pod-network.c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.844 [INFO][5242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" HandleID="k8s-pod-network.c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.846 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:17.864702 containerd[1511]: 2025-09-09 04:03:17.851 [INFO][5235] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Sep 9 04:03:17.869781 systemd[1]: run-netns-cni\x2d9ee0b27b\x2d6ca0\x2d2333\x2d1b09\x2d1e0b75c03b3d.mount: Deactivated successfully. 
Sep 9 04:03:17.898818 containerd[1511]: time="2025-09-09T04:03:17.873424967Z" level=info msg="TearDown network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\" successfully" Sep 9 04:03:17.898818 containerd[1511]: time="2025-09-09T04:03:17.897849296Z" level=info msg="StopPodSandbox for \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\" returns successfully" Sep 9 04:03:17.900132 containerd[1511]: time="2025-09-09T04:03:17.899489595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d865dc46-rrs7b,Uid:2e1ceb73-abd9-444a-9955-f6d015b27503,Namespace:calico-apiserver,Attempt:1,}" Sep 9 04:03:18.292954 systemd-networkd[1432]: cali1362df4b4ef: Link UP Sep 9 04:03:18.297803 systemd-networkd[1432]: cali1362df4b4ef: Gained carrier Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.044 [INFO][5249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0 calico-apiserver-7d865dc46- calico-apiserver 2e1ceb73-abd9-444a-9955-f6d015b27503 1058 0 2025-09-09 04:02:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d865dc46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gbnqu.gb1.brightbox.com calico-apiserver-7d865dc46-rrs7b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1362df4b4ef [] [] }} ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-rrs7b" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.044 [INFO][5249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-rrs7b" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.153 [INFO][5262] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" HandleID="k8s-pod-network.dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.155 [INFO][5262] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" HandleID="k8s-pod-network.dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a3e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gbnqu.gb1.brightbox.com", "pod":"calico-apiserver-7d865dc46-rrs7b", "timestamp":"2025-09-09 04:03:18.150545999 +0000 UTC"}, Hostname:"srv-gbnqu.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.155 [INFO][5262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.155 [INFO][5262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.155 [INFO][5262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gbnqu.gb1.brightbox.com' Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.182 [INFO][5262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.202 [INFO][5262] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.216 [INFO][5262] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.224 [INFO][5262] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.235 [INFO][5262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.235 [INFO][5262] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.241 [INFO][5262] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.252 [INFO][5262] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.277 [INFO][5262] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.200/26] block=192.168.51.192/26 handle="k8s-pod-network.dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.278 [INFO][5262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.200/26] handle="k8s-pod-network.dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" host="srv-gbnqu.gb1.brightbox.com" Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.278 [INFO][5262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:03:18.338900 containerd[1511]: 2025-09-09 04:03:18.278 [INFO][5262] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.200/26] IPv6=[] ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" HandleID="k8s-pod-network.dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" Sep 9 04:03:18.342790 containerd[1511]: 2025-09-09 04:03:18.285 [INFO][5249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-rrs7b" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0", GenerateName:"calico-apiserver-7d865dc46-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e1ceb73-abd9-444a-9955-f6d015b27503", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d865dc46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7d865dc46-rrs7b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1362df4b4ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:18.342790 containerd[1511]: 2025-09-09 04:03:18.286 [INFO][5249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.200/32] ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-rrs7b" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" Sep 9 04:03:18.342790 containerd[1511]: 2025-09-09 04:03:18.286 [INFO][5249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1362df4b4ef ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-rrs7b" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" Sep 9 04:03:18.342790 containerd[1511]: 2025-09-09 04:03:18.295 [INFO][5249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Namespace="calico-apiserver" 
Pod="calico-apiserver-7d865dc46-rrs7b" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" Sep 9 04:03:18.342790 containerd[1511]: 2025-09-09 04:03:18.297 [INFO][5249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-rrs7b" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0", GenerateName:"calico-apiserver-7d865dc46-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e1ceb73-abd9-444a-9955-f6d015b27503", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d865dc46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a", Pod:"calico-apiserver-7d865dc46-rrs7b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali1362df4b4ef", MAC:"e2:4d:6a:ae:9f:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:03:18.342790 containerd[1511]: 2025-09-09 04:03:18.326 [INFO][5249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a" Namespace="calico-apiserver" Pod="calico-apiserver-7d865dc46-rrs7b" WorkloadEndpoint="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0" Sep 9 04:03:18.449894 containerd[1511]: time="2025-09-09T04:03:18.448554135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 04:03:18.449894 containerd[1511]: time="2025-09-09T04:03:18.448650184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 04:03:18.449894 containerd[1511]: time="2025-09-09T04:03:18.448710075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:18.449894 containerd[1511]: time="2025-09-09T04:03:18.448927683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 04:03:18.508608 systemd[1]: Started cri-containerd-dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a.scope - libcontainer container dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a. 
Sep 9 04:03:18.636795 containerd[1511]: time="2025-09-09T04:03:18.636706269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d865dc46-rrs7b,Uid:2e1ceb73-abd9-444a-9955-f6d015b27503,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a\"" Sep 9 04:03:18.797420 containerd[1511]: time="2025-09-09T04:03:18.795830445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:18.803135 containerd[1511]: time="2025-09-09T04:03:18.802266591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 04:03:18.812159 containerd[1511]: time="2025-09-09T04:03:18.812098418Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:18.829740 containerd[1511]: time="2025-09-09T04:03:18.829648869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:18.831899 containerd[1511]: time="2025-09-09T04:03:18.831037017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.131844409s" Sep 9 04:03:18.831899 containerd[1511]: time="2025-09-09T04:03:18.831118649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference 
\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 04:03:18.835985 containerd[1511]: time="2025-09-09T04:03:18.835945167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 04:03:18.837153 containerd[1511]: time="2025-09-09T04:03:18.837084717Z" level=info msg="CreateContainer within sandbox \"6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 04:03:18.871715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366758289.mount: Deactivated successfully. Sep 9 04:03:18.876905 containerd[1511]: time="2025-09-09T04:03:18.873750426Z" level=info msg="CreateContainer within sandbox \"6b677da14318601c52c1f8ffa082620d3f04c83a7ae8fcf542619ffd57a99891\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"043cc7a2e969ff4f81709e2e7f4eae7606026d067bf88d0093f30f4a70413570\"" Sep 9 04:03:18.880569 containerd[1511]: time="2025-09-09T04:03:18.878509424Z" level=info msg="StartContainer for \"043cc7a2e969ff4f81709e2e7f4eae7606026d067bf88d0093f30f4a70413570\"" Sep 9 04:03:18.979601 systemd[1]: Started cri-containerd-043cc7a2e969ff4f81709e2e7f4eae7606026d067bf88d0093f30f4a70413570.scope - libcontainer container 043cc7a2e969ff4f81709e2e7f4eae7606026d067bf88d0093f30f4a70413570. 
Sep 9 04:03:19.047291 containerd[1511]: time="2025-09-09T04:03:19.047036165Z" level=info msg="StartContainer for \"043cc7a2e969ff4f81709e2e7f4eae7606026d067bf88d0093f30f4a70413570\" returns successfully" Sep 9 04:03:19.281647 kubelet[2688]: I0909 04:03:19.279699 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f595bff58-228qc" podStartSLOduration=2.066588598 podStartE2EDuration="13.279664558s" podCreationTimestamp="2025-09-09 04:03:06 +0000 UTC" firstStartedPulling="2025-09-09 04:03:07.620320624 +0000 UTC m=+59.430961529" lastFinishedPulling="2025-09-09 04:03:18.833396583 +0000 UTC m=+70.644037489" observedRunningTime="2025-09-09 04:03:19.275822791 +0000 UTC m=+71.086463733" watchObservedRunningTime="2025-09-09 04:03:19.279664558 +0000 UTC m=+71.090305471" Sep 9 04:03:20.291772 systemd-networkd[1432]: cali1362df4b4ef: Gained IPv6LL Sep 9 04:03:21.271761 containerd[1511]: time="2025-09-09T04:03:21.271676218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:21.273552 containerd[1511]: time="2025-09-09T04:03:21.273482556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 04:03:21.274198 containerd[1511]: time="2025-09-09T04:03:21.274163736Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:21.402380 containerd[1511]: time="2025-09-09T04:03:21.401810376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.565791823s" Sep 9 
04:03:21.402380 containerd[1511]: time="2025-09-09T04:03:21.401900287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 04:03:21.404524 containerd[1511]: time="2025-09-09T04:03:21.404470380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 04:03:21.405870 containerd[1511]: time="2025-09-09T04:03:21.405590889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:21.411791 containerd[1511]: time="2025-09-09T04:03:21.411510614Z" level=info msg="CreateContainer within sandbox \"953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 04:03:21.443622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894033229.mount: Deactivated successfully. Sep 9 04:03:21.458389 containerd[1511]: time="2025-09-09T04:03:21.456571929Z" level=info msg="CreateContainer within sandbox \"953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1a71a58b55f2daa5046d9085cb39bc463d09db623c1650e4d2ea375218caadac\"" Sep 9 04:03:21.460925 containerd[1511]: time="2025-09-09T04:03:21.460890117Z" level=info msg="StartContainer for \"1a71a58b55f2daa5046d9085cb39bc463d09db623c1650e4d2ea375218caadac\"" Sep 9 04:03:21.530581 systemd[1]: Started cri-containerd-1a71a58b55f2daa5046d9085cb39bc463d09db623c1650e4d2ea375218caadac.scope - libcontainer container 1a71a58b55f2daa5046d9085cb39bc463d09db623c1650e4d2ea375218caadac. 
Sep 9 04:03:21.583904 containerd[1511]: time="2025-09-09T04:03:21.583838065Z" level=info msg="StartContainer for \"1a71a58b55f2daa5046d9085cb39bc463d09db623c1650e4d2ea375218caadac\" returns successfully" Sep 9 04:03:22.142234 update_engine[1489]: I20250909 04:03:22.141520 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 04:03:22.142951 update_engine[1489]: I20250909 04:03:22.142519 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 04:03:22.143152 update_engine[1489]: I20250909 04:03:22.143083 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 9 04:03:22.143787 update_engine[1489]: E20250909 04:03:22.143733 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 04:03:22.143880 update_engine[1489]: I20250909 04:03:22.143836 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 9 04:03:24.674991 systemd[1]: Started sshd@9-10.230.58.214:22-147.75.109.163:48554.service - OpenSSH per-connection server daemon (147.75.109.163:48554). Sep 9 04:03:25.673970 sshd[5437]: Accepted publickey for core from 147.75.109.163 port 48554 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 04:03:25.683831 sshd[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:03:25.705519 systemd-logind[1485]: New session 12 of user core. Sep 9 04:03:25.712630 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 9 04:03:27.029425 containerd[1511]: time="2025-09-09T04:03:27.028636892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:27.040975 containerd[1511]: time="2025-09-09T04:03:27.040110841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 04:03:27.046394 containerd[1511]: time="2025-09-09T04:03:27.045749649Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:27.062990 containerd[1511]: time="2025-09-09T04:03:27.062919267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:03:27.065661 containerd[1511]: time="2025-09-09T04:03:27.065615273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.660909348s" Sep 9 04:03:27.066064 containerd[1511]: time="2025-09-09T04:03:27.065810587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 04:03:27.129434 containerd[1511]: time="2025-09-09T04:03:27.128979616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 04:03:27.163738 containerd[1511]: time="2025-09-09T04:03:27.163563265Z" level=info msg="CreateContainer within sandbox 
\"c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 04:03:27.197404 containerd[1511]: time="2025-09-09T04:03:27.196165172Z" level=info msg="CreateContainer within sandbox \"c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"83e628943413954e1c638a35c0dc081f8ed8309ac9c8633ddd4dc2781477b227\"" Sep 9 04:03:27.210836 containerd[1511]: time="2025-09-09T04:03:27.210225700Z" level=info msg="StartContainer for \"83e628943413954e1c638a35c0dc081f8ed8309ac9c8633ddd4dc2781477b227\"" Sep 9 04:03:27.272241 sshd[5437]: pam_unix(sshd:session): session closed for user core Sep 9 04:03:27.286246 systemd[1]: sshd@9-10.230.58.214:22-147.75.109.163:48554.service: Deactivated successfully. Sep 9 04:03:27.297701 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 04:03:27.300727 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Sep 9 04:03:27.304620 systemd-logind[1485]: Removed session 12. Sep 9 04:03:27.333623 systemd[1]: Started cri-containerd-83e628943413954e1c638a35c0dc081f8ed8309ac9c8633ddd4dc2781477b227.scope - libcontainer container 83e628943413954e1c638a35c0dc081f8ed8309ac9c8633ddd4dc2781477b227. 
Sep 9 04:03:27.435406 containerd[1511]: time="2025-09-09T04:03:27.435252559Z" level=info msg="StartContainer for \"83e628943413954e1c638a35c0dc081f8ed8309ac9c8633ddd4dc2781477b227\" returns successfully" Sep 9 04:03:29.424803 kubelet[2688]: I0909 04:03:29.421663 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:03:29.919854 kubelet[2688]: I0909 04:03:29.900093 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d865dc46-shtjq" podStartSLOduration=52.281356918 podStartE2EDuration="1m3.900018241s" podCreationTimestamp="2025-09-09 04:02:26 +0000 UTC" firstStartedPulling="2025-09-09 04:03:15.509916588 +0000 UTC m=+67.320557490" lastFinishedPulling="2025-09-09 04:03:27.128577901 +0000 UTC m=+78.939218813" observedRunningTime="2025-09-09 04:03:28.536970664 +0000 UTC m=+80.347611579" watchObservedRunningTime="2025-09-09 04:03:29.900018241 +0000 UTC m=+81.710659141" Sep 9 04:03:31.114615 kubelet[2688]: I0909 04:03:31.114424 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:03:32.146396 update_engine[1489]: I20250909 04:03:32.145641 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 04:03:32.151029 update_engine[1489]: I20250909 04:03:32.147889 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 04:03:32.151029 update_engine[1489]: I20250909 04:03:32.149415 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 9 04:03:32.152562 update_engine[1489]: E20250909 04:03:32.151584 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 04:03:32.152562 update_engine[1489]: I20250909 04:03:32.152446 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 9 04:03:32.528532 systemd[1]: Started sshd@10-10.230.58.214:22-147.75.109.163:38274.service - OpenSSH per-connection server daemon (147.75.109.163:38274). 
Sep 9 04:03:33.564576 sshd[5543]: Accepted publickey for core from 147.75.109.163 port 38274 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:03:33.570484 sshd[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:03:33.588829 systemd-logind[1485]: New session 13 of user core.
Sep 9 04:03:33.597958 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 04:03:35.314287 sshd[5543]: pam_unix(sshd:session): session closed for user core
Sep 9 04:03:35.333414 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit.
Sep 9 04:03:35.335080 systemd[1]: sshd@10-10.230.58.214:22-147.75.109.163:38274.service: Deactivated successfully.
Sep 9 04:03:35.343153 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 04:03:35.361561 systemd-logind[1485]: Removed session 13.
Sep 9 04:03:36.645200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380142534.mount: Deactivated successfully.
Sep 9 04:03:38.036988 containerd[1511]: time="2025-09-09T04:03:38.036836501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:38.051307 containerd[1511]: time="2025-09-09T04:03:38.051000735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 9 04:03:38.074923 containerd[1511]: time="2025-09-09T04:03:38.074652038Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:38.087155 containerd[1511]: time="2025-09-09T04:03:38.087044340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:38.093282 containerd[1511]: time="2025-09-09T04:03:38.093213574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 10.959825209s"
Sep 9 04:03:38.093757 containerd[1511]: time="2025-09-09T04:03:38.093480170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 9 04:03:38.153873 containerd[1511]: time="2025-09-09T04:03:38.153819668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 9 04:03:38.191740 containerd[1511]: time="2025-09-09T04:03:38.191525725Z" level=info msg="CreateContainer within sandbox \"9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 04:03:38.271060 containerd[1511]: time="2025-09-09T04:03:38.270858297Z" level=info msg="CreateContainer within sandbox \"9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a79a318511dceef4d7dbc2048cc7b6fab2205c833eb8f2d8174fc36de5dab48d\""
Sep 9 04:03:38.273668 containerd[1511]: time="2025-09-09T04:03:38.273613164Z" level=info msg="StartContainer for \"a79a318511dceef4d7dbc2048cc7b6fab2205c833eb8f2d8174fc36de5dab48d\""
Sep 9 04:03:38.513261 systemd[1]: Started cri-containerd-a79a318511dceef4d7dbc2048cc7b6fab2205c833eb8f2d8174fc36de5dab48d.scope - libcontainer container a79a318511dceef4d7dbc2048cc7b6fab2205c833eb8f2d8174fc36de5dab48d.
Sep 9 04:03:38.649554 containerd[1511]: time="2025-09-09T04:03:38.648969203Z" level=info msg="StartContainer for \"a79a318511dceef4d7dbc2048cc7b6fab2205c833eb8f2d8174fc36de5dab48d\" returns successfully"
Sep 9 04:03:38.698350 containerd[1511]: time="2025-09-09T04:03:38.698263326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 9 04:03:38.704277 containerd[1511]: time="2025-09-09T04:03:38.703008319Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:38.854455 containerd[1511]: time="2025-09-09T04:03:38.854276998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 700.390262ms"
Sep 9 04:03:38.855206 containerd[1511]: time="2025-09-09T04:03:38.854490473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 9 04:03:38.858415 containerd[1511]: time="2025-09-09T04:03:38.857894657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 9 04:03:38.872286 containerd[1511]: time="2025-09-09T04:03:38.872112999Z" level=info msg="CreateContainer within sandbox \"dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 04:03:38.909723 containerd[1511]: time="2025-09-09T04:03:38.909472368Z" level=info msg="CreateContainer within sandbox \"dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d61ee06abf4167c856a5f6c5088855f7aa66538676ef0d30fbca2d1a1a47da83\""
Sep 9 04:03:38.913867 containerd[1511]: time="2025-09-09T04:03:38.911949599Z" level=info msg="StartContainer for \"d61ee06abf4167c856a5f6c5088855f7aa66538676ef0d30fbca2d1a1a47da83\""
Sep 9 04:03:38.986602 systemd[1]: Started cri-containerd-d61ee06abf4167c856a5f6c5088855f7aa66538676ef0d30fbca2d1a1a47da83.scope - libcontainer container d61ee06abf4167c856a5f6c5088855f7aa66538676ef0d30fbca2d1a1a47da83.
Sep 9 04:03:39.068654 containerd[1511]: time="2025-09-09T04:03:39.068520852Z" level=info msg="StartContainer for \"d61ee06abf4167c856a5f6c5088855f7aa66538676ef0d30fbca2d1a1a47da83\" returns successfully"
Sep 9 04:03:40.055302 kubelet[2688]: I0909 04:03:40.054693 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-gx7p8" podStartSLOduration=47.3719036 podStartE2EDuration="1m9.031108322s" podCreationTimestamp="2025-09-09 04:02:31 +0000 UTC" firstStartedPulling="2025-09-09 04:03:16.492342551 +0000 UTC m=+68.302983458" lastFinishedPulling="2025-09-09 04:03:38.151547246 +0000 UTC m=+89.962188180" observedRunningTime="2025-09-09 04:03:39.987611207 +0000 UTC m=+91.798252164" watchObservedRunningTime="2025-09-09 04:03:40.031108322 +0000 UTC m=+91.841749230"
Sep 9 04:03:40.492136 systemd[1]: Started sshd@11-10.230.58.214:22-147.75.109.163:45652.service - OpenSSH per-connection server daemon (147.75.109.163:45652).
Sep 9 04:03:40.681923 kubelet[2688]: I0909 04:03:40.681874 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 04:03:41.600592 sshd[5694]: Accepted publickey for core from 147.75.109.163 port 45652 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:03:41.604454 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:03:41.620733 systemd-logind[1485]: New session 14 of user core.
Sep 9 04:03:41.629653 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 04:03:42.142227 update_engine[1489]: I20250909 04:03:42.141676 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 04:03:42.148138 update_engine[1489]: I20250909 04:03:42.142777 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 04:03:42.148138 update_engine[1489]: I20250909 04:03:42.145180 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 04:03:42.148138 update_engine[1489]: E20250909 04:03:42.146696 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 04:03:42.148138 update_engine[1489]: I20250909 04:03:42.146780 1489 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 9 04:03:42.154953 update_engine[1489]: I20250909 04:03:42.152242 1489 omaha_request_action.cc:617] Omaha request response:
Sep 9 04:03:42.154953 update_engine[1489]: E20250909 04:03:42.154881 1489 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 9 04:03:42.200776 kubelet[2688]: I0909 04:03:42.198338 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 04:03:42.247853 update_engine[1489]: I20250909 04:03:42.247596 1489 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 9 04:03:42.247853 update_engine[1489]: I20250909 04:03:42.247658 1489 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 9 04:03:42.247853 update_engine[1489]: I20250909 04:03:42.247673 1489 update_attempter.cc:306] Processing Done.
Sep 9 04:03:42.251142 update_engine[1489]: E20250909 04:03:42.250437 1489 update_attempter.cc:619] Update failed.
Sep 9 04:03:42.252714 update_engine[1489]: I20250909 04:03:42.251271 1489 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 9 04:03:42.252714 update_engine[1489]: I20250909 04:03:42.251297 1489 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 9 04:03:42.252714 update_engine[1489]: I20250909 04:03:42.251312 1489 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 9 04:03:42.252714 update_engine[1489]: I20250909 04:03:42.252380 1489 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 9 04:03:42.255263 update_engine[1489]: I20250909 04:03:42.254309 1489 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 9 04:03:42.255263 update_engine[1489]: I20250909 04:03:42.254343 1489 omaha_request_action.cc:272] Request:
Sep 9 04:03:42.255263 update_engine[1489]:
Sep 9 04:03:42.255263 update_engine[1489]:
Sep 9 04:03:42.255263 update_engine[1489]:
Sep 9 04:03:42.255263 update_engine[1489]:
Sep 9 04:03:42.255263 update_engine[1489]:
Sep 9 04:03:42.255263 update_engine[1489]:
Sep 9 04:03:42.255263 update_engine[1489]: I20250909 04:03:42.254374 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 04:03:42.255263 update_engine[1489]: I20250909 04:03:42.254707 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 04:03:42.255263 update_engine[1489]: I20250909 04:03:42.254988 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 04:03:42.261287 update_engine[1489]: E20250909 04:03:42.258291 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 04:03:42.261287 update_engine[1489]: I20250909 04:03:42.258394 1489 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 9 04:03:42.261287 update_engine[1489]: I20250909 04:03:42.258416 1489 omaha_request_action.cc:617] Omaha request response:
Sep 9 04:03:42.261287 update_engine[1489]: I20250909 04:03:42.258430 1489 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 9 04:03:42.261287 update_engine[1489]: I20250909 04:03:42.258442 1489 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 9 04:03:42.261287 update_engine[1489]: I20250909 04:03:42.258500 1489 update_attempter.cc:306] Processing Done.
Sep 9 04:03:42.261287 update_engine[1489]: I20250909 04:03:42.258517 1489 update_attempter.cc:310] Error event sent.
Sep 9 04:03:42.261287 update_engine[1489]: I20250909 04:03:42.258580 1489 update_check_scheduler.cc:74] Next update check in 42m52s
Sep 9 04:03:42.332275 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 9 04:03:42.332275 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 9 04:03:42.404241 kubelet[2688]: I0909 04:03:42.403314 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d865dc46-rrs7b" podStartSLOduration=56.186602709 podStartE2EDuration="1m16.403242904s" podCreationTimestamp="2025-09-09 04:02:26 +0000 UTC" firstStartedPulling="2025-09-09 04:03:18.639759639 +0000 UTC m=+70.450400545" lastFinishedPulling="2025-09-09 04:03:38.856399814 +0000 UTC m=+90.667040740" observedRunningTime="2025-09-09 04:03:40.093808169 +0000 UTC m=+91.904449088" watchObservedRunningTime="2025-09-09 04:03:42.403242904 +0000 UTC m=+94.213883821"
Sep 9 04:03:43.218581 containerd[1511]: time="2025-09-09T04:03:43.218397646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:43.236607 sshd[5694]: pam_unix(sshd:session): session closed for user core
Sep 9 04:03:43.251179 containerd[1511]: time="2025-09-09T04:03:43.218933704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 9 04:03:43.282428 containerd[1511]: time="2025-09-09T04:03:43.276248242Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:43.283085 systemd[1]: sshd@11-10.230.58.214:22-147.75.109.163:45652.service: Deactivated successfully.
Sep 9 04:03:43.289653 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 04:03:43.292524 containerd[1511]: time="2025-09-09T04:03:43.291409610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:03:43.291643 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit.
Sep 9 04:03:43.294287 systemd-logind[1485]: Removed session 14.
Sep 9 04:03:43.295748 containerd[1511]: time="2025-09-09T04:03:43.293493564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 4.435422981s"
Sep 9 04:03:43.298761 containerd[1511]: time="2025-09-09T04:03:43.298666290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 9 04:03:43.314666 containerd[1511]: time="2025-09-09T04:03:43.314609065Z" level=info msg="CreateContainer within sandbox \"953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 04:03:43.423574 systemd[1]: Started sshd@12-10.230.58.214:22-147.75.109.163:45668.service - OpenSSH per-connection server daemon (147.75.109.163:45668).
Sep 9 04:03:43.465536 containerd[1511]: time="2025-09-09T04:03:43.463868794Z" level=info msg="CreateContainer within sandbox \"953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"170289a1a4c35a20f757dcdca6334245ffe72501dc540bad8f9d9f1d0e59c615\""
Sep 9 04:03:43.470595 containerd[1511]: time="2025-09-09T04:03:43.470112471Z" level=info msg="StartContainer for \"170289a1a4c35a20f757dcdca6334245ffe72501dc540bad8f9d9f1d0e59c615\""
Sep 9 04:03:43.629207 systemd[1]: run-containerd-runc-k8s.io-170289a1a4c35a20f757dcdca6334245ffe72501dc540bad8f9d9f1d0e59c615-runc.UKjTbS.mount: Deactivated successfully.
Sep 9 04:03:43.642816 systemd[1]: Started cri-containerd-170289a1a4c35a20f757dcdca6334245ffe72501dc540bad8f9d9f1d0e59c615.scope - libcontainer container 170289a1a4c35a20f757dcdca6334245ffe72501dc540bad8f9d9f1d0e59c615.
Sep 9 04:03:43.744422 containerd[1511]: time="2025-09-09T04:03:43.744250569Z" level=info msg="StartContainer for \"170289a1a4c35a20f757dcdca6334245ffe72501dc540bad8f9d9f1d0e59c615\" returns successfully"
Sep 9 04:03:44.418516 sshd[5760]: Accepted publickey for core from 147.75.109.163 port 45668 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:03:44.428105 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:03:44.440585 systemd-logind[1485]: New session 15 of user core.
Sep 9 04:03:44.445623 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 04:03:45.206117 kubelet[2688]: I0909 04:03:45.201230 2688 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 04:03:45.206117 kubelet[2688]: I0909 04:03:45.205510 2688 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 04:03:45.722978 sshd[5760]: pam_unix(sshd:session): session closed for user core
Sep 9 04:03:45.733887 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit.
Sep 9 04:03:45.734105 systemd[1]: sshd@12-10.230.58.214:22-147.75.109.163:45668.service: Deactivated successfully.
Sep 9 04:03:45.738070 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 04:03:45.742058 systemd-logind[1485]: Removed session 15.
Sep 9 04:03:45.879718 systemd[1]: Started sshd@13-10.230.58.214:22-147.75.109.163:45678.service - OpenSSH per-connection server daemon (147.75.109.163:45678).
Sep 9 04:03:46.878681 sshd[5812]: Accepted publickey for core from 147.75.109.163 port 45678 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:03:46.881948 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:03:46.892928 systemd-logind[1485]: New session 16 of user core.
Sep 9 04:03:46.902712 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 04:03:47.712771 sshd[5812]: pam_unix(sshd:session): session closed for user core
Sep 9 04:03:47.718937 systemd[1]: sshd@13-10.230.58.214:22-147.75.109.163:45678.service: Deactivated successfully.
Sep 9 04:03:47.722764 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 04:03:47.724444 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit.
Sep 9 04:03:47.726076 systemd-logind[1485]: Removed session 16.
Sep 9 04:03:49.112226 kubelet[2688]: I0909 04:03:49.110318 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-79jrx" podStartSLOduration=48.098983127 podStartE2EDuration="1m17.110247179s" podCreationTimestamp="2025-09-09 04:02:32 +0000 UTC" firstStartedPulling="2025-09-09 04:03:14.291170163 +0000 UTC m=+66.101811068" lastFinishedPulling="2025-09-09 04:03:43.302434207 +0000 UTC m=+95.113075120" observedRunningTime="2025-09-09 04:03:44.91989673 +0000 UTC m=+96.730537637" watchObservedRunningTime="2025-09-09 04:03:49.110247179 +0000 UTC m=+100.920888093"
Sep 9 04:03:52.873822 systemd[1]: Started sshd@14-10.230.58.214:22-147.75.109.163:37754.service - OpenSSH per-connection server daemon (147.75.109.163:37754).
Sep 9 04:03:53.902441 sshd[5876]: Accepted publickey for core from 147.75.109.163 port 37754 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:03:53.906249 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:03:53.921763 systemd-logind[1485]: New session 17 of user core.
Sep 9 04:03:53.929774 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 04:03:54.826683 sshd[5876]: pam_unix(sshd:session): session closed for user core
Sep 9 04:03:54.834572 systemd[1]: sshd@14-10.230.58.214:22-147.75.109.163:37754.service: Deactivated successfully.
Sep 9 04:03:54.839337 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 04:03:54.840643 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit.
Sep 9 04:03:54.842938 systemd-logind[1485]: Removed session 17.
Sep 9 04:03:59.993760 systemd[1]: Started sshd@15-10.230.58.214:22-147.75.109.163:37766.service - OpenSSH per-connection server daemon (147.75.109.163:37766).
Sep 9 04:04:00.950560 sshd[5913]: Accepted publickey for core from 147.75.109.163 port 37766 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:04:00.954132 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:04:00.965337 systemd-logind[1485]: New session 18 of user core.
Sep 9 04:04:00.973766 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 04:04:02.008103 sshd[5913]: pam_unix(sshd:session): session closed for user core
Sep 9 04:04:02.014673 systemd[1]: sshd@15-10.230.58.214:22-147.75.109.163:37766.service: Deactivated successfully.
Sep 9 04:04:02.019339 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 04:04:02.020474 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit.
Sep 9 04:04:02.023461 systemd-logind[1485]: Removed session 18.
Sep 9 04:04:07.175860 systemd[1]: Started sshd@16-10.230.58.214:22-147.75.109.163:43210.service - OpenSSH per-connection server daemon (147.75.109.163:43210).
Sep 9 04:04:08.102257 sshd[5925]: Accepted publickey for core from 147.75.109.163 port 43210 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:04:08.105515 sshd[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:04:08.114108 systemd-logind[1485]: New session 19 of user core.
Sep 9 04:04:08.122643 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 04:04:09.002249 sshd[5925]: pam_unix(sshd:session): session closed for user core
Sep 9 04:04:09.009995 systemd[1]: sshd@16-10.230.58.214:22-147.75.109.163:43210.service: Deactivated successfully.
Sep 9 04:04:09.013984 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 04:04:09.015802 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit.
Sep 9 04:04:09.018191 systemd-logind[1485]: Removed session 19.
Sep 9 04:04:10.123578 containerd[1511]: time="2025-09-09T04:04:10.123416454Z" level=info msg="StopPodSandbox for \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\""
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.466 [WARNING][5948] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f3923f8-1ebf-4579-9a05-a6111fc5a148", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad", Pod:"csi-node-driver-79jrx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid590fdc21db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.470 [INFO][5948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.470 [INFO][5948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" iface="eth0" netns=""
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.470 [INFO][5948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.470 [INFO][5948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.714 [INFO][5955] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" HandleID="k8s-pod-network.6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.718 [INFO][5955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.718 [INFO][5955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.741 [WARNING][5955] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" HandleID="k8s-pod-network.6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.741 [INFO][5955] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" HandleID="k8s-pod-network.6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.743 [INFO][5955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:04:10.750404 containerd[1511]: 2025-09-09 04:04:10.746 [INFO][5948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:04:10.758848 containerd[1511]: time="2025-09-09T04:04:10.758791764Z" level=info msg="TearDown network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\" successfully"
Sep 9 04:04:10.759016 containerd[1511]: time="2025-09-09T04:04:10.758986863Z" level=info msg="StopPodSandbox for \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\" returns successfully"
Sep 9 04:04:10.770907 containerd[1511]: time="2025-09-09T04:04:10.770852352Z" level=info msg="RemovePodSandbox for \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\""
Sep 9 04:04:10.819202 containerd[1511]: time="2025-09-09T04:04:10.818973987Z" level=info msg="Forcibly stopping sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\""
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.888 [WARNING][5969] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f3923f8-1ebf-4579-9a05-a6111fc5a148", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"953639c64c4be6296ee7df4dc6af2f2107b680ed6318e6a33f4dd411a4b661ad", Pod:"csi-node-driver-79jrx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid590fdc21db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.888 [INFO][5969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.888 [INFO][5969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" iface="eth0" netns=""
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.888 [INFO][5969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.888 [INFO][5969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.931 [INFO][5977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" HandleID="k8s-pod-network.6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.931 [INFO][5977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.931 [INFO][5977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.941 [WARNING][5977] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" HandleID="k8s-pod-network.6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.941 [INFO][5977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" HandleID="k8s-pod-network.6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6" Workload="srv--gbnqu.gb1.brightbox.com-k8s-csi--node--driver--79jrx-eth0"
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.944 [INFO][5977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:04:10.954329 containerd[1511]: 2025-09-09 04:04:10.950 [INFO][5969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6"
Sep 9 04:04:10.956467 containerd[1511]: time="2025-09-09T04:04:10.954852382Z" level=info msg="TearDown network for sandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\" successfully"
Sep 9 04:04:11.043226 containerd[1511]: time="2025-09-09T04:04:11.042970713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 9 04:04:11.043226 containerd[1511]: time="2025-09-09T04:04:11.043131188Z" level=info msg="RemovePodSandbox \"6817957e00aececb30c317f909557b7139bafacecf0039e64f7f1149da49dfb6\" returns successfully"
Sep 9 04:04:11.045385 containerd[1511]: time="2025-09-09T04:04:11.044041869Z" level=info msg="StopPodSandbox for \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\""
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.115 [WARNING][5991] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"60ea252b-bb65-4eeb-baac-a9493773063e", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238", Pod:"goldmane-7988f88666-gx7p8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib60b69e6e05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.116 [INFO][5991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d"
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.116 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" iface="eth0" netns=""
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.116 [INFO][5991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d"
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.116 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d"
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.151 [INFO][5998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" HandleID="k8s-pod-network.f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0"
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.151 [INFO][5998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.151 [INFO][5998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.162 [WARNING][5998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" HandleID="k8s-pod-network.f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0"
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.162 [INFO][5998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" HandleID="k8s-pod-network.f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0"
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.165 [INFO][5998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:04:11.171763 containerd[1511]: 2025-09-09 04:04:11.168 [INFO][5991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d"
Sep 9 04:04:11.175254 containerd[1511]: time="2025-09-09T04:04:11.171856141Z" level=info msg="TearDown network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\" successfully"
Sep 9 04:04:11.175254 containerd[1511]: time="2025-09-09T04:04:11.171945374Z" level=info msg="StopPodSandbox for \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\" returns successfully"
Sep 9 04:04:11.175254 containerd[1511]: time="2025-09-09T04:04:11.173636934Z" level=info msg="RemovePodSandbox for \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\""
Sep 9 04:04:11.175254 containerd[1511]: time="2025-09-09T04:04:11.173690006Z" level=info msg="Forcibly stopping sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\""
Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.238 [WARNING][6012] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"60ea252b-bb65-4eeb-baac-a9493773063e", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"9ce707e0d6e5a16c8a3c7d185c1b36f7b2f261bd2df3bc6333d9f142febc1238", Pod:"goldmane-7988f88666-gx7p8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib60b69e6e05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.239 [INFO][6012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d"
Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.239 [INFO][6012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" iface="eth0" netns="" Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.239 [INFO][6012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.239 [INFO][6012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.283 [INFO][6019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" HandleID="k8s-pod-network.f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.283 [INFO][6019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.283 [INFO][6019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.293 [WARNING][6019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" HandleID="k8s-pod-network.f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.293 [INFO][6019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" HandleID="k8s-pod-network.f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-goldmane--7988f88666--gx7p8-eth0" Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.295 [INFO][6019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:04:11.300444 containerd[1511]: 2025-09-09 04:04:11.297 [INFO][6012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d" Sep 9 04:04:11.300444 containerd[1511]: time="2025-09-09T04:04:11.299977610Z" level=info msg="TearDown network for sandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\" successfully" Sep 9 04:04:11.326232 containerd[1511]: time="2025-09-09T04:04:11.326138481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 04:04:11.326696 containerd[1511]: time="2025-09-09T04:04:11.326313524Z" level=info msg="RemovePodSandbox \"f05c8f497e728f37ef8637447323edf2919da1aa43f94f9996b10025d9dc474d\" returns successfully" Sep 9 04:04:11.327204 containerd[1511]: time="2025-09-09T04:04:11.327160101Z" level=info msg="StopPodSandbox for \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\"" Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.380 [WARNING][6033] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"485695d1-af74-4c84-bc1e-c3693d7e6d5c", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5", Pod:"coredns-7c65d6cfc9-6m2tz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f55afb427a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.380 [INFO][6033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.381 [INFO][6033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" iface="eth0" netns="" Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.381 [INFO][6033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.381 [INFO][6033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.422 [INFO][6040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" HandleID="k8s-pod-network.031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.422 [INFO][6040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.422 [INFO][6040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.437 [WARNING][6040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" HandleID="k8s-pod-network.031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.437 [INFO][6040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" HandleID="k8s-pod-network.031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.440 [INFO][6040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:04:11.448891 containerd[1511]: 2025-09-09 04:04:11.444 [INFO][6033] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:04:11.448891 containerd[1511]: time="2025-09-09T04:04:11.447836631Z" level=info msg="TearDown network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\" successfully" Sep 9 04:04:11.448891 containerd[1511]: time="2025-09-09T04:04:11.447890117Z" level=info msg="StopPodSandbox for \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\" returns successfully" Sep 9 04:04:11.453080 containerd[1511]: time="2025-09-09T04:04:11.450552833Z" level=info msg="RemovePodSandbox for \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\"" Sep 9 04:04:11.453080 containerd[1511]: time="2025-09-09T04:04:11.450589053Z" level=info msg="Forcibly stopping sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\"" Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.522 [WARNING][6054] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"485695d1-af74-4c84-bc1e-c3693d7e6d5c", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"24432677eafa275567e331e32c3fc9ef87f2d2e43339316fafd251fbbea93bf5", Pod:"coredns-7c65d6cfc9-6m2tz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f55afb427a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:04:11.581436 containerd[1511]: 
2025-09-09 04:04:11.523 [INFO][6054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.523 [INFO][6054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" iface="eth0" netns="" Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.523 [INFO][6054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.523 [INFO][6054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.557 [INFO][6061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" HandleID="k8s-pod-network.031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.557 [INFO][6061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.557 [INFO][6061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.568 [WARNING][6061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" HandleID="k8s-pod-network.031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.568 [INFO][6061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" HandleID="k8s-pod-network.031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Workload="srv--gbnqu.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--6m2tz-eth0" Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.574 [INFO][6061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:04:11.581436 containerd[1511]: 2025-09-09 04:04:11.577 [INFO][6054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d" Sep 9 04:04:11.581436 containerd[1511]: time="2025-09-09T04:04:11.580715804Z" level=info msg="TearDown network for sandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\" successfully" Sep 9 04:04:11.628096 containerd[1511]: time="2025-09-09T04:04:11.627789123Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 04:04:11.628096 containerd[1511]: time="2025-09-09T04:04:11.627949519Z" level=info msg="RemovePodSandbox \"031e35c5be7d2096f88b1bf948dcaffa5070a0df91ae8b89f0ab710afeb22b2d\" returns successfully" Sep 9 04:04:11.629826 containerd[1511]: time="2025-09-09T04:04:11.629777549Z" level=info msg="StopPodSandbox for \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\"" Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.708 [WARNING][6075] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0", GenerateName:"calico-apiserver-7d865dc46-", Namespace:"calico-apiserver", SelfLink:"", UID:"4164c1d5-1085-4008-9d19-95f326c5d9e7", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d865dc46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d", Pod:"calico-apiserver-7d865dc46-shtjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a6e2a1f5a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.709 [INFO][6075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.709 [INFO][6075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" iface="eth0" netns="" Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.709 [INFO][6075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.709 [INFO][6075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.748 [INFO][6083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" HandleID="k8s-pod-network.d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.749 [INFO][6083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.749 [INFO][6083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.759 [WARNING][6083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" HandleID="k8s-pod-network.d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.759 [INFO][6083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" HandleID="k8s-pod-network.d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.761 [INFO][6083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:04:11.768656 containerd[1511]: 2025-09-09 04:04:11.765 [INFO][6075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:04:11.771503 containerd[1511]: time="2025-09-09T04:04:11.768723427Z" level=info msg="TearDown network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\" successfully" Sep 9 04:04:11.771503 containerd[1511]: time="2025-09-09T04:04:11.768770898Z" level=info msg="StopPodSandbox for \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\" returns successfully" Sep 9 04:04:11.771503 containerd[1511]: time="2025-09-09T04:04:11.769997728Z" level=info msg="RemovePodSandbox for \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\"" Sep 9 04:04:11.771503 containerd[1511]: time="2025-09-09T04:04:11.770056820Z" level=info msg="Forcibly stopping sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\"" Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.822 [WARNING][6097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0", GenerateName:"calico-apiserver-7d865dc46-", Namespace:"calico-apiserver", SelfLink:"", UID:"4164c1d5-1085-4008-9d19-95f326c5d9e7", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d865dc46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"c911bce465eb69c39312fcca37d0905ebd46918f1cd3f722f6fc9899f6d6374d", Pod:"calico-apiserver-7d865dc46-shtjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a6e2a1f5a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.823 [INFO][6097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.823 [INFO][6097] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" iface="eth0" netns="" Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.824 [INFO][6097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.824 [INFO][6097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.868 [INFO][6104] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" HandleID="k8s-pod-network.d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.868 [INFO][6104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.868 [INFO][6104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.877 [WARNING][6104] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" HandleID="k8s-pod-network.d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.877 [INFO][6104] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" HandleID="k8s-pod-network.d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--shtjq-eth0" Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.879 [INFO][6104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:04:11.885427 containerd[1511]: 2025-09-09 04:04:11.882 [INFO][6097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821" Sep 9 04:04:11.885427 containerd[1511]: time="2025-09-09T04:04:11.884779331Z" level=info msg="TearDown network for sandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\" successfully" Sep 9 04:04:11.892780 containerd[1511]: time="2025-09-09T04:04:11.892742053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 04:04:11.893001 containerd[1511]: time="2025-09-09T04:04:11.892969853Z" level=info msg="RemovePodSandbox \"d10421baba18dbc1345b59c1498b06397779a6c40ced699ffa660c1805237821\" returns successfully"
Sep 9 04:04:11.893927 containerd[1511]: time="2025-09-09T04:04:11.893881840Z" level=info msg="StopPodSandbox for \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\""
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:11.947 [WARNING][6118] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0", GenerateName:"calico-apiserver-7d865dc46-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e1ceb73-abd9-444a-9955-f6d015b27503", ResourceVersion:"1230", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d865dc46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a", Pod:"calico-apiserver-7d865dc46-rrs7b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1362df4b4ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:11.948 [INFO][6118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a"
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:11.948 [INFO][6118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" iface="eth0" netns=""
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:11.948 [INFO][6118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a"
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:11.948 [INFO][6118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a"
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:11.996 [INFO][6125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" HandleID="k8s-pod-network.c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0"
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:11.999 [INFO][6125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:11.999 [INFO][6125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:12.013 [WARNING][6125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" HandleID="k8s-pod-network.c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0"
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:12.013 [INFO][6125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" HandleID="k8s-pod-network.c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0"
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:12.015 [INFO][6125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:04:12.020753 containerd[1511]: 2025-09-09 04:04:12.018 [INFO][6118] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a"
Sep 9 04:04:12.021628 containerd[1511]: time="2025-09-09T04:04:12.020825218Z" level=info msg="TearDown network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\" successfully"
Sep 9 04:04:12.021628 containerd[1511]: time="2025-09-09T04:04:12.020872883Z" level=info msg="StopPodSandbox for \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\" returns successfully"
Sep 9 04:04:12.021854 containerd[1511]: time="2025-09-09T04:04:12.021633937Z" level=info msg="RemovePodSandbox for \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\""
Sep 9 04:04:12.021854 containerd[1511]: time="2025-09-09T04:04:12.021761674Z" level=info msg="Forcibly stopping sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\""
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.082 [WARNING][6139] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0", GenerateName:"calico-apiserver-7d865dc46-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e1ceb73-abd9-444a-9955-f6d015b27503", ResourceVersion:"1230", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d865dc46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gbnqu.gb1.brightbox.com", ContainerID:"dd76998eec6930536ba813d2a9837d8203efdfc250ad74c017e7ccbc2986c94a", Pod:"calico-apiserver-7d865dc46-rrs7b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1362df4b4ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.083 [INFO][6139] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a"
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.083 [INFO][6139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" iface="eth0" netns=""
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.083 [INFO][6139] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a"
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.083 [INFO][6139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a"
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.118 [INFO][6146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" HandleID="k8s-pod-network.c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0"
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.118 [INFO][6146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.118 [INFO][6146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.128 [WARNING][6146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" HandleID="k8s-pod-network.c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0"
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.128 [INFO][6146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" HandleID="k8s-pod-network.c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a" Workload="srv--gbnqu.gb1.brightbox.com-k8s-calico--apiserver--7d865dc46--rrs7b-eth0"
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.131 [INFO][6146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:04:12.136247 containerd[1511]: 2025-09-09 04:04:12.133 [INFO][6139] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a"
Sep 9 04:04:12.136247 containerd[1511]: time="2025-09-09T04:04:12.136159918Z" level=info msg="TearDown network for sandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\" successfully"
Sep 9 04:04:12.141688 containerd[1511]: time="2025-09-09T04:04:12.141596593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 9 04:04:12.141843 containerd[1511]: time="2025-09-09T04:04:12.141755423Z" level=info msg="RemovePodSandbox \"c7328708c29fff9d8a658be8feda3291f006141457b392f557199bd0f452ab4a\" returns successfully"
Sep 9 04:04:14.194239 systemd[1]: Started sshd@17-10.230.58.214:22-147.75.109.163:32836.service - OpenSSH per-connection server daemon (147.75.109.163:32836).
Sep 9 04:04:15.191875 sshd[6153]: Accepted publickey for core from 147.75.109.163 port 32836 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:04:15.194990 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:04:15.204113 systemd-logind[1485]: New session 20 of user core.
Sep 9 04:04:15.211677 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 04:04:16.512893 sshd[6153]: pam_unix(sshd:session): session closed for user core
Sep 9 04:04:16.521953 systemd[1]: sshd@17-10.230.58.214:22-147.75.109.163:32836.service: Deactivated successfully.
Sep 9 04:04:16.525743 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 04:04:16.528429 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit.
Sep 9 04:04:16.530220 systemd-logind[1485]: Removed session 20.
Sep 9 04:04:16.672436 systemd[1]: Started sshd@18-10.230.58.214:22-147.75.109.163:32838.service - OpenSSH per-connection server daemon (147.75.109.163:32838).
Sep 9 04:04:17.595483 sshd[6171]: Accepted publickey for core from 147.75.109.163 port 32838 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:04:17.599317 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:04:17.609521 systemd-logind[1485]: New session 21 of user core.
Sep 9 04:04:17.614621 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 04:04:18.710820 sshd[6171]: pam_unix(sshd:session): session closed for user core
Sep 9 04:04:18.716811 systemd[1]: sshd@18-10.230.58.214:22-147.75.109.163:32838.service: Deactivated successfully.
Sep 9 04:04:18.720659 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 04:04:18.722844 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
Sep 9 04:04:18.724895 systemd-logind[1485]: Removed session 21.
Sep 9 04:04:18.877224 systemd[1]: Started sshd@19-10.230.58.214:22-147.75.109.163:32840.service - OpenSSH per-connection server daemon (147.75.109.163:32840).
Sep 9 04:04:19.060039 systemd[1]: run-containerd-runc-k8s.io-4c42943bd89c01ff70b5f0135c328deb471690106d5d5480f7789a6e73ff0c90-runc.yAeAVq.mount: Deactivated successfully.
Sep 9 04:04:19.822887 sshd[6182]: Accepted publickey for core from 147.75.109.163 port 32840 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:04:19.829105 sshd[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:04:19.841018 systemd-logind[1485]: New session 22 of user core.
Sep 9 04:04:19.846603 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 04:04:24.613088 sshd[6182]: pam_unix(sshd:session): session closed for user core
Sep 9 04:04:24.672513 systemd[1]: sshd@19-10.230.58.214:22-147.75.109.163:32840.service: Deactivated successfully.
Sep 9 04:04:24.691841 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 04:04:24.692632 systemd[1]: session-22.scope: Consumed 1.412s CPU time.
Sep 9 04:04:24.698584 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
Sep 9 04:04:24.705818 systemd-logind[1485]: Removed session 22.
Sep 9 04:04:24.812101 systemd[1]: Started sshd@20-10.230.58.214:22-147.75.109.163:53822.service - OpenSSH per-connection server daemon (147.75.109.163:53822).
Sep 9 04:04:25.842583 sshd[6259]: Accepted publickey for core from 147.75.109.163 port 53822 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:04:25.847621 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:04:25.861168 systemd-logind[1485]: New session 23 of user core.
Sep 9 04:04:25.869426 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 04:04:27.447335 sshd[6259]: pam_unix(sshd:session): session closed for user core
Sep 9 04:04:27.457574 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
Sep 9 04:04:27.458662 systemd[1]: sshd@20-10.230.58.214:22-147.75.109.163:53822.service: Deactivated successfully.
Sep 9 04:04:27.464963 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 04:04:27.466781 systemd-logind[1485]: Removed session 23.
Sep 9 04:04:27.610887 systemd[1]: Started sshd@21-10.230.58.214:22-147.75.109.163:53832.service - OpenSSH per-connection server daemon (147.75.109.163:53832).
Sep 9 04:04:28.594554 sshd[6273]: Accepted publickey for core from 147.75.109.163 port 53832 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:04:28.597245 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:04:28.621887 systemd-logind[1485]: New session 24 of user core.
Sep 9 04:04:28.633595 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 04:04:29.718006 sshd[6273]: pam_unix(sshd:session): session closed for user core
Sep 9 04:04:29.732350 systemd[1]: sshd@21-10.230.58.214:22-147.75.109.163:53832.service: Deactivated successfully.
Sep 9 04:04:29.740118 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 04:04:29.745630 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit.
Sep 9 04:04:29.749285 systemd-logind[1485]: Removed session 24.
Sep 9 04:04:34.930638 systemd[1]: Started sshd@22-10.230.58.214:22-147.75.109.163:44188.service - OpenSSH per-connection server daemon (147.75.109.163:44188).
Sep 9 04:04:35.933494 sshd[6339]: Accepted publickey for core from 147.75.109.163 port 44188 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:04:35.940109 sshd[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:04:35.952738 systemd-logind[1485]: New session 25 of user core.
Sep 9 04:04:35.963933 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 04:04:37.222931 sshd[6339]: pam_unix(sshd:session): session closed for user core
Sep 9 04:04:37.232625 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit.
Sep 9 04:04:37.235418 systemd[1]: sshd@22-10.230.58.214:22-147.75.109.163:44188.service: Deactivated successfully.
Sep 9 04:04:37.242798 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 04:04:37.246985 systemd-logind[1485]: Removed session 25.
Sep 9 04:04:42.394907 systemd[1]: Started sshd@23-10.230.58.214:22-147.75.109.163:43436.service - OpenSSH per-connection server daemon (147.75.109.163:43436).
Sep 9 04:04:43.414675 sshd[6367]: Accepted publickey for core from 147.75.109.163 port 43436 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:04:43.419110 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:04:43.440771 systemd-logind[1485]: New session 26 of user core.
Sep 9 04:04:43.449658 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 04:04:44.501548 sshd[6367]: pam_unix(sshd:session): session closed for user core
Sep 9 04:04:44.513066 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit.
Sep 9 04:04:44.516687 systemd[1]: sshd@23-10.230.58.214:22-147.75.109.163:43436.service: Deactivated successfully.
Sep 9 04:04:44.522025 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 04:04:44.528804 systemd-logind[1485]: Removed session 26.
Sep 9 04:04:49.682220 systemd[1]: Started sshd@24-10.230.58.214:22-147.75.109.163:43446.service - OpenSSH per-connection server daemon (147.75.109.163:43446).
Sep 9 04:04:50.773112 sshd[6430]: Accepted publickey for core from 147.75.109.163 port 43446 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4
Sep 9 04:04:50.781428 sshd[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:04:50.796156 systemd-logind[1485]: New session 27 of user core.
Sep 9 04:04:50.804107 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 04:04:52.057297 sshd[6430]: pam_unix(sshd:session): session closed for user core
Sep 9 04:04:52.066002 systemd-logind[1485]: Session 27 logged out. Waiting for processes to exit.
Sep 9 04:04:52.066856 systemd[1]: sshd@24-10.230.58.214:22-147.75.109.163:43446.service: Deactivated successfully.
Sep 9 04:04:52.072574 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 04:04:52.074959 systemd-logind[1485]: Removed session 27.
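The sshd records above follow a fixed lifecycle per connection: `Accepted publickey` → `pam_unix ... session opened` → `systemd-logind: New session N` → `session-N.scope` started, then the mirror image on logout ending with `Removed session N`. When auditing a journal like this, pairing the logind open/close markers is often enough to reconstruct session history. A small sketch of such a parser; the helper name `session_events` and the exact regexes are assumptions matched to this log's phrasing, not a general systemd format:

```python
import re

# Patterns keyed to the systemd-logind lines seen in this journal.
OPEN = re.compile(r"New session (\d+) of user (\S+)\.")
CLOSE = re.compile(r"Removed session (\d+)\.")


def session_events(lines):
    """Yield (session_id, 'open'|'close') pairs in log order."""
    for line in lines:
        if (m := OPEN.search(line)):
            yield m.group(1), "open"
        elif (m := CLOSE.search(line)):
            yield m.group(1), "close"
```

Pairing each open with its close gives per-session durations; for example, session 22 above opens at 04:04:19 and is removed at 04:04:24, matching the `Consumed 1.412s CPU time` accounting on its scope unit.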